This book originated as an initiative of the Textual Studies team of INKE: Implementing New Knowledge Environments, a major collaborative research initiative.[1] INKE research involved a number of interdisciplinary teams exploring the future of the book from the perspective of its history and the new possibilities afforded by digital technologies. In all cases the work was approached from a humanities perspective, focusing first on human need and practice rather than on what is technically possible in the new digital medium. The title of this collection of essays, Beyond accessibility: Textual studies in the twenty-first century, owes much to an observation advanced by Susan Hockey in Electronic texts in the humanities, published, fittingly, in 2000, at the turn of the twenty-first century. There Hockey observed that "scholars have begun to use the Web as a new publication medium" and that to that point we were using the Web "as a way of providing access to information rather than analyzing that information" (2000, 9). In the very busy years since 2000, scholars have continued to work on making texts accessible—from obscure manuscripts to born-digital works—but we have also turned our collective attention to avenues of investigation that have considerably expanded the purview of textual studies in the post-digital era. These include not just the editing of texts but a range of activities now understood to be closely allied, if not fully entwined, with a much enlarged understanding of editing: the study of reading practices, textual analysis, book history, software development, and interface design. Each of these areas of textual studies expertise is represented in the present collection.
We begin from a recognition of the foundational importance of textual studies to understanding and participating in the development of new reading technologies in the digital age (for a fuller theoretical discussion of the role of textual studies in new knowledge environments, see Galey et al. 2012; for a discussion of the founding principles of INKE, see Siemens et al. 2009). Other textual scholars before us have argued that any effort to influence the book's future cannot afford to neglect its past (Chartier 1995; Kirschenbaum 2008; Mak 2011; McKenzie 1999; McGann 2001; Nunberg 1996). Early in the life of the INKE initiative, its Textual Studies team took "beyond remediation" as its theme, exploring the unfolding history of textual migration from paper-based media into the digital era and the implications of this migration for the development of new reading environments (Galey et al. 2012). Remediation, we think, was necessitated by the drive for access that Hockey noted, and thus we see it as a logical next step to move "beyond accessibility." Together, these two conceptual models, remediation and accessibility, characterize much of the first wave of digital textual studies. Initially it was revolutionary to see the artefacts of textual history released from print and liberated into the malleable, manipulable media of the digital age. A key outcome of this remediation has been the ongoing revolution in accessibility that began with the ability to turn pages into pixels, bits, and bytes, and transmit them electronically. This revolution took off in earnest with the explosion of the World Wide Web at the end of the twentieth century. Just over a decade later we could simply take for granted that digital processes infused every step of the generation, editing, dissemination, and study of texts, and we assumed access not only to images of documents scattered across the globe but also, in many cases, to keyed and encoded versions of these documents.
In the present collection of essays, we move beyond the simple but profound fact of this new accessibility to ask, and to begin to answer, the question: what next? Where are textual studies moving in the twenty-first century? And just what do we mean by textual studies? Through much of the twentieth century, the answer might have been relatively simple: it probably would have been defined with reference to scholarly editing, in particular the work of establishing an authoritative text based on the forensic study of surviving documentary witnesses of a particular work. It was largely a process defined by the imagined end product, a relatively clearly demarcated thing called a scholarly edition. Now, in the early twenty-first century, that formulation has changed, with profound effect. We accept the accuracy of David Greetham's assertion that "Textual scholars study process (the historical stages in the production, transmission, and reception of texts), not just product (the text resulting from such production, transmission, and reception)" (Greetham 1994, 2; emphasis in original). When we place the emphasis on process, the relationship becomes more visible between scholarly editing, on the one hand, and book history, material bibliography, and the transmission and preservation of texts, on the other (for a fuller discussion of the relationship between textual criticism and book history, see Van Mierlo 2007, and Howsam 2006, esp. 8-27). With the new affordances of digital texts, textual scholarship must also involve some consideration of new kinds of reading and engagement with text, from text analysis to distant reading.[2] And in response to these new ways of reading and engaging with texts, textual scholars now involve themselves in the development of tools and reading environments to support these new activities and processes. So, then, the evolving field of textual studies, while still centrally concerned with scholarly editing, also significantly incorporates book history, the history of reading, text analysis, and, increasingly, information studies and interface design among the products and processes with which it is concerned.
The crucial development in this shift of focus from product to all aspects of process is the digital turn. In the 1980s textual scholarship saw in what was then called humanities computing new and powerful means of improving access to all varieties of texts (for examples, see esp. Hockey 2004, McKenzie 1999, and Shillingsburg 1986). This focus on accessibility made such good sense that the majority of those concerned with any aspect of textual production, preservation, or consumption allowed it to fill their field of vision for the following two decades. It enabled us to see that the digital shift was revolutionary in the way that the printing press was: radically changing the scale of production and dissemination of text. Accessibility collapsed geopolitical barriers to information, allowing anyone with an internet connection to locate and view documents they might otherwise never see.[3] The impact has been profound in education, giving undergraduate as well as graduate students access to rare and specialized documents. The digital turn even narrowed economic gaps (albeit imperfectly), providing small and comparatively under-resourced institutions with access to specialist material—thus, Susan Hockey's turn-of-the-century recognition, noted above, that "much of the present interest in electronic texts is focused on access" (Hockey 2000, 3). But accessibility meant more than just the ability to see texts. It meant being able to interact with text and texts in new ways. For this reason, the digital turn demands consideration of both the history and future of reading and, in the present, active development of the reading tools, processes, and affordances that make the most of the new possibilities of the digital text.
Textual studies have been profoundly affected by the affordances of the digital text. One of the most immediately appreciated capabilities of a machine-readable text is its searchability and, by extension, processability. And thus textual studies are now concerned with computer-assisted reading and analysis of digital texts, what Jean-Guy Meunier calls CARAT for short (see Meunier 2009; on text analysis, see Hoover and Lattig 2007, and Hoover, Culpeper and O'Halloran 2014). CARAT, in many of its aspects, is premised on the searchability of the text, that is, the ability of the computer to locate strings of text more quickly and with greater reliability than the human eye can, and to locate strings that might otherwise elude the human reader altogether. These capabilities are therefore also the carrot, the enticement to editing (understood as scholarly preparation of texts) in the digital medium. Indeed, textual scholars have come to embrace Jerome McGann's prefatory remark in Radiant textuality that "the general field of humanities education and scholarship will not take the use of digital technology seriously until one demonstrates how its tools improve the ways we explore and explain aesthetic works" (McGann 2001, xii). An early effort to apply digital tools to the analysis of text—TACT (Text Analysis Computing Tools)—was undertaken in the mid-1980s by John Bradley, Ian Lancashire, and others at the University of Toronto. A next-generation attempt to bring text analysis into the scholarly mainstream was the Text Analysis Portal for Research (TAPoR), led by Geoffrey Rockwell and others, which has come to include Stéfan Sinclair and Rockwell's Voyant Tools, "a web-based reading and analysis environment for digital texts" (Sinclair and Rockwell 2017).
Along with these new approaches to reading, texts themselves are being re-cast into new forms: that of the corpus and that of the digital archive. While the study of a corpus of texts is not new, emerging digital methods of distant reading enable a scholar to treat an entire corpus as, in significant ways, a single text. For example, the growing corpus produced through the Text Creation Partnership in connection with Early English Books Online (EEBO), comprising whole libraries of machine-readable documents, could conceivably be treated as in effect a single text, as scholars turn their attention to, and develop tools for making use and sense of, what is to us Big Data (see EEBO 2016, and also Text Creation Partnership 2016). At present there is a tentative relationship between corpus analysis and what we have come to think of as textual studies. On the one hand, corpora are typically not prepared (i.e. edited) in the way texts have been in the long tradition of scholarly editing; on the other, large corpora present new possibilities for textual study, which has always been the ultimate purpose of scholarly editing. The digital scholarly archive has more easily found a place at the centre of textual studies. The scholarly edition remains an important outcome of textual scholarship, but because we now understand the edition as a process and as the performance of an argument arising out of expert analysis of all relevant aspects of the textual tradition of a work of literature—through all of its material instantiations—provision of access to an archive of at least some of those instantiations (themselves subject to scholarly preparation) is increasingly seen as a necessary complement to the scholarly edition. By recognizing, through this emphasis on process, that every edition is but one argument, one representation of the available data, it follows that special place must be given to the archive, the site where the raw materials are provided for any number of arguments and any number of processes: from quotation to text analysis to the making of new editions for particular purposes (see Gabler 2010 on the scholarly edition). The digital archive can provide access not only to any iteration, any instantiation of a work, but, by making available the materials that preserve traces of composition, publication, and dissemination practices, it can also be said to provide the necessary material for all potential manifestations of a work. The digital archive, then, is an essential tool for textual scholars and book historians, as well as for those who would do new and previously unimagined things with these materials. Here too the traditional divide between the archivist, preserving and providing access to research materials, and the researcher, producing scholarly outcomes using those materials, begins to dissolve. The digital archive is now one of the expected results of any major editorial project. Scholars, and increasingly funding agencies, are dissatisfied with major editorial projects that lack at least some portion of an accompanying archive. We expect not only the finished edifice of a new scholarly edition but also the materials themselves, so that we can be architects and builders of our own editions.
Beyond being searchable and processable, electronic texts are also eminently malleable and transformable, qualities that have also had a shaping effect on textual scholarship. Being malleable, a text is also always improvable; being easily transformable, it is also reusable in ways that are impossible when text is locked in print. These qualities, too, lead to new kinds of accessibility. As a result, the archive can always and with relative ease grow and improve through correction and augmentation. An important facilitator of the augmented text is the TEI (Text Encoding Initiative), which provides guidelines for developing schemas for use in XML markup of text (see TEI 2016). XML (eXtensible Markup Language) is an encoding language for creating semantically enhanced documents by naming their constituent parts: smart documents that declare their own data structure. In this form, other Web-based languages, such as CSS and XSLT, can do things with the encoded text: they can, for example, tell a processor to render a section of text marked <quote> either as an indented block quote, or wrapped in quotation marks, or in green text (if one should so choose); or to extract all quotations and report them in a list. The same document can be expressed, manipulated, queried, and transformed in any number of ways.
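To give a concrete sense of how this works, consider a minimal sketch of our own (not drawn from any particular project or edition). The XML below simply names a quotation; the XSLT stylesheet that follows is one of many possible transformations, here wrapping each <quote> in quotation marks for display and appending a list of every quotation found in the document:

    <!-- A short passage encoded in XML: the <quote> element names one of the
         document's constituent parts without prescribing its appearance. -->
    <passage>
      <p>As Hockey observed, <quote>scholars have begun to use the Web as a
      new publication medium</quote>, a remark that still resonates.</p>
    </passage>

    <!-- One possible XSLT stylesheet: it renders the passage as HTML, wraps
         each quotation in the HTML q element (displayed with quotation
         marks), and appends a list of all quotations in the document. -->
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="html"/>
      <xsl:template match="/passage">
        <div>
          <xsl:apply-templates select="p"/>
          <ul>
            <xsl:for-each select="//quote">
              <li><xsl:value-of select="."/></li>
            </xsl:for-each>
          </ul>
        </div>
      </xsl:template>
      <xsl:template match="p">
        <p><xsl:apply-templates/></p>
      </xsl:template>
      <xsl:template match="quote">
        <q><xsl:apply-templates/></q>
      </xsl:template>
    </xsl:stylesheet>

A different stylesheet applied to the same document could just as easily render the quotation as an indented block, or in green text; the encoded document itself remains untouched.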
An important corollary to the digital turn is the turn to materiality. Facsimile reproductions in print, which became a standard form in the mid- to late-twentieth century, were largely dedicated to increasing access to rare materials of scholarly interest, particularly early printed books, but there was an added interest in seeing the works of canonical authors in their original forms. (Scolar Press [Menston, Engl.] and Da Capo Press/Theatrum Orbis Terrarum [Amsterdam] are notable examples, as are Scholars' Facsimiles & Reprints [Delmar, NY] and, on a smaller scale, the facsimile reproductions of Kent State University Press, which specialized in literary works.) On a much larger scale, Eugene Power began a project in the 1930s to microfilm the STC titles in the British Museum; these images are now available in digital form through EEBO (for a short account of this history of EEBO, see "About EEBO"). With the advent of the facsimile, scholars and students had readier access to the original forms of the works they studied. This material turn was revolutionized first by the digital image file and then by the World Wide Web. But while Internet archives and databases—some commercial and others open access—deliver large-scale access to entire libraries of primary documents in facsimile, there remains the need for carefully prepared (i.e. edited) and focused archives, with high-quality images of pertinent primary materials. In some respects, the digital archive is a manifestation of the unediting movement espoused by Randall McLeod and others who argue that the proper object of scholarly study is the individual instantiation of a text (see, for example, McLeod 1981/1982). But the digital archive, bearing all manifestations and iterations of the text (and related media), is also essential for the work of comparison that is so central to critical editing (see McGann and Buzzetti 2006, esp. 70). The digital image is an important factor in this privileging of the document as a manifestation of a text, but so too is the TEI. As a recognized international standard, the TEI guidelines are now central to textual studies, and their descriptive focus ensures a continuing place for material bibliography. The TEI seeks, in the first instance, to provide an interchangeable, standard language for describing documents. Its guidelines are set up to describe what is seen on the page, focusing on the structure and format of the text and its material manifestation, rather than its intellectual content—although there is a great deal of overlap here, and some contention as to the adequacy of the TEI as a mechanism for description. (On the use of the TEI for describing the physical document, see Cummings 2012 in Nelson and Terras' Digitising Medieval and Early Modern material culture; on critiques of the TEI for describing documents, see Jon deTombe's chapter in the present collection.)
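What this descriptive focus looks like in practice can be suggested with a minimal sketch of our own (not taken from any particular edition): a few TEI elements recording the material features of a page rather than judgments about its intellectual content:

    <!-- TEI elements describing what is seen on the page: a page break (pb)
         linked to a facsimile image, italic type (hi), a deleted reading
         (del), and a marginal addition (add) supplied by a later hand. -->
    <pb n="12" facs="page12.jpg"/>
    <p>The text continues in <hi rend="italic">italic type</hi>, with one
       reading <del rend="strikethrough">struck out</del> and another
       <add place="margin">supplied in the margin</add>.</p>

Nothing here dictates how the passage should be displayed or interpreted; the markup records the document's material features and leaves rendering and analysis to later processes.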
The digital turn has also fostered what we might call the social turn. There is a new sociability in textual studies—applying both to texts and to those who produce and study them—that arises out of the digital state of the text. In the 1980s D. F. McKenzie coined the term "the sociology of texts," bringing within the purview of bibliography the social structures and processes involved in the production and transmission of text. (It should be noted that this notion was not the exclusive property of the bibliographer at this time; see for example Marotti 1986.) Texts are embedded, not only materially in the context of the documents that manifest them, but socially in the lives of those who produce, transmit, or consume them: in short, in the life of nearly everyone who lives in a literate society. Jerome McGann takes up from McKenzie the notion of the "social text," arguing that the digital archive is the natural medium for giving primacy to the physical object as a product of social processes and presenting it in relation to other instantiations (McGann 2006 and McGann 2001, 21). This is especially true of what John Bryant calls the "fluid text," the text whose tradition bears marks of variance in its transmission history (Bryant 2002). At the same time, hypertext gave new expression to the inter-textual nature of documents. More recently, linked data have begun to emerge as a new desideratum of textual studies, promising for texts a networked future such as people now enjoy on Facebook (see for example: http://linkeddata.org/ and http://www.w3.org/standards/semanticweb/data). But there is also sociability of another kind. In a context where the large digital archive is a central endeavour, textual studies have developed from an often solitary practice into a collaborative one. With the emergence of crowdsourcing, scholars are imagining new possibilities for resourcing large-scale editing projects. The social edition, newly theorized by Ray Siemens (INKE), Peter Robinson (see chapter 7 in this volume), and others, is social in both respects: it flows from the work of McKenzie, McGann, Peter Shillingsburg, and others who insist that we regard text as embedded in social relations, existing within a context of diverse realized and potential manifestations; and it arises from a network of human agents responsible for its production, dissemination, and consumption in a networked environment.
One profound implication of the digital turn in textuality, implied in the wide range of activities identified above, is what we might call "thorough agency." Editors now shoulder many of the responsibilities of publication, of putting content into a form that meets the needs of users. In the digital medium, to edit is in many cases to publish, and not just to publish, but often to serve as developers of the medium of publication. The days when a scholar would simply hand off a typescript of a book, article, or edition and receive galleys in return are long gone. From the advent of the word processor, to desktop publishing, and now the age of open access with XML, XSLT, and HTML—especially as it pertains to primary materials—scholars have become increasingly involved in publishing their own materials, even if it is through an academic press. As a consequence, textual scholars are shapers of the next, emerging forms of publication: we are the printing presses, type cutters, and printers of our age. In the simplest case, this might mean the development of a webpage. In more extreme cases, this might mean participating in or even initiating software development. Those involved in the production of edited text often find themselves pushing the development of platforms that can adequately support the requirements of their resulting editions. They have also become actively involved in the development of software to enable the editing process, from collation tools to, more recently, transcription tools.[4] Thinking about the medium and our various ways of interfacing with it forces us to consider new ways of doing textual studies.
The turn to the materiality and sociability of the text has brought attention to the environment in which texts are found, not just the space between the covers, but the space of the shelf, the desk, the library, and now, the World Wide Web. While the nineteenth and twentieth centuries saw significant development and innovation in the way that texts and textual histories were presented (e.g. the apparatus of the variorum edition), the malleability of the digital medium enables experimentation, modelling, and prototyping without the urgency of implementation that the economics of print bring to bear. Work on the digital reading environment has, of course, been going on for a long time. The electronic environment (i.e. interface) has been the subject of research since at least the early 1990s in such diverse knowledge domains as architecture, engineering, and medicine (Newman and Wellner 1992; in architecture, see Elliott and Hearst 2002 and Krüger et al. 1995), but the traditional text-based domains of the humanities have received little if any such consideration in computer science. So humanities scholars have, perforce, taken up this work. For textual scholars, developing new interfaces also means reimagining and redefining the very form of the book: how it looks and how it functions in the digital medium. It means exploring new forms for representing and analyzing textual information.
This book of essays, then, is about moving beyond mere accessibility, the condition in which we enjoy unprecedented and immediate access to documents delivered to our computers through the World Wide Web. No longer having to travel through space and time to our documents, we now have our documents come to us. This is a revolutionary starting point for considering how we might move forward into new kinds of accessibility, and indeed for textual scholars to consider what lies beyond this first great boon of the digital era for textual studies. Increasing accessibility may be the dominant narrative of the digital turn, but it must not be the only one. Is it even possible to ensure accessibility in an electronic docuverse that is rapidly expanding but also, in some respects, fragile and ephemeral? What happens to texts, to textuality, and to reading practices as the means and modes of access change? What new opportunities do we see before us for enhancing accessibility? How far have we come since this first watershed moment of accessibility? And what else is there in our new methods and approaches to textual studies besides mere accessibility?
The chapters that follow offer some answers to the question of what lies beyond accessibility in textual studies. The first three chapters have in common a breadth of scope that in each case extends over a century or more of human history. In the first, Adriaan van der Weel asserts the importance of considering ongoing changes in reading practices as we imagine the future of digital textuality. He marks the emergence of homo typographicus into the digital "docuverse," arguing that digital textuality is moving beyond accessibility and "is now coming into its own," and "that just as long centuries of books and printing have conditioned us to read in new ways, our screens are now set to condition us to read in the particular way suggested by the inherent characteristics of the computers that drive them." Van der Weel's argument is that the reading-derived conditioning he takes as his topic is fundamentally determinative not merely of the practices and thinking of textual scholars but of humanity as a whole. In the next chapter, Sydney Shep illustrates recent developments in bibliography and textual studies by providing a case study on the use of digital tools and processes to explore and extend McKenzie's foundational work on the sociology and materiality of texts. She argues for a new understanding of "historical materialism," less as a Marxist concept and more as a way of foregrounding the material (re-)turn in history and textual studies. In so doing, Shep straddles the print and digital worlds to exemplify the multi-modal and polyvalent nature of textual scholarship in the twenty-first century. Dean Irvine continues in the sociological-materialist vein in his examination of the new media collaboratory, arguing that the digital turn has resulted in an unacknowledged repetition of the experimentation he suggests is a hallmark of the twentieth-century Modernist movement in literature and art. Irvine sees in the spirit, practice, and collaborative quality of the digital humanities what he and others have already noted in the "avant-garde labs of the modernist period." In the fourth essay, narrower in scope but still commenting on the material turn, Christoph Bläsi looks to the future of book history, asking, "Will our children have the chance to do research on today's digital books?" Bläsi's inquiry is an exploration of the difficulties that future book historians may face when attempting to locate and access archival materials related to the processes and products of today's digital books. Ebook formats, the embeddedness of ebooks and etexts in "book apps," restrictive digital rights management (DRM) measures, and undocumented modification all pose problems that must be addressed by current and emerging practices for the preservation of digital books if future research in histories of the book is to be possible.
Shifting from the material turn to the issue of "thorough agency" in scholarly editing, Jon deTombe argues that one of the most important affordances of XML markup for textual studies is its support of the iterative process of text-production in a cycle of interpretation and representation. He demonstrates how iterations of a text are read and understood in comparison with previous and subsequent iterations. The iterative use of XML markup with XSLT transforms, deTombe argues, enables a system that acknowledges and engages the ambiguity of language and the autopoietic nature of text. In a chapter on a major initiative to publish the complete works of Dutch writer Willem Frederik Hermans, Peter Kegel describes the various steps taken by the editorial team responsible for creating a definitive text designed for both a specialist and a popular audience. The project Kegel describes exemplifies the thorough agency of digital editors who experiment with new forms for expressing textual content, always with a view to meeting the needs of readers, howsoever they be defined.
The next three chapters address the social turn in textual studies. If the archive can be said to provide the necessary material for all potential iterations of a work, these essays show that social editing might be said to promise as many ways of using that material as there are editors and scholars willing to work with it. Peter Robinson's chapter on "Social editing" works toward a definition and taxonomy of the social edition by surveying and critiquing recent projects and practices. Robinson grapples with the fluidity of our understanding of social texts, social editing, and social editions to offer a basis for understanding these three dynamic conceptions and how they apply in current practice. Continuing on the theme of social editing, Yin Liu calls to our attention current examples of crowdsourcing before reminding us that this approach to large-scale editing projects is far from new. She argues that the first true crowdsourced project was the Oxford English Dictionary and that the difficulties faced by its editors and the eventual, hard-fought success of that project should be seen as instructive for large-scale collaborative projects undertaken with digital tools. Looking to facilitate a social model for editing, James Smith and Raffaele Viglianti describe a new environment for editing, publishing, and studying digital facsimiles that is "openly addressable and shareable." Focusing on the editing of manuscripts in facsimile form, Smith and Viglianti present their "Shared canvas model" as a means for theorizing and modelling the way in which the elements of a manuscript, whether at the codex level or on the page, relate to each other, without having to commit to any single representation of these relationships. Based on a linked open data (LOD) model, this way of representing the structure of the document not only enables the tailoring of multiple representations of the document and related materials but also facilitates a collaborative, modular approach to scholarly editing.
The next series of essays considers the role of the textual scholar in the design of new reading environments. Brent Nelson's chapter on the "textual habitat" takes an ecological metaphor as a lens for theorizing the factors and dynamics involved in the development of reading environments. Taking the textual scholarship of the Bible as an historical example, Nelson argues that successful reading technologies must grow out of the DNA of the text and its hermeneutical context and that the health of these reading technologies depends on a reading environment that matches the motivation of readers with bibliographic affordances. The value of good design is the focus of Scott Schofield's analysis of three new reading environments: New Radial, the Dynamic Table of Contexts, and Bubblelines. These tools have been designed to facilitate digital reading with the particular needs of scholarly readers in mind. They are adaptable to individual needs and circumstances, enabling readers to move beyond accessibility to modify and curate their own reading experience; and while each facilitates a purpose different from the others', the development of all three is informed, as Schofield notes, by careful consideration of the long continuum of textual communication. The next two chapters examine and evaluate instances of digitally remediated primary resources. Laura Estill and Michelle Levy review a variety of available digital resources that to varying degrees enable and enhance the study of handwritten literary and historical documents by women. As these authors note, theoretically informed critique of the content and design of such resources remains scant, and their work takes a step toward filling this lacuna. Estill and Levy suggest that digital editions are the future of manuscript studies, but this optimistic outlook is tempered by the recognition that it is also necessary to better understand how digital manuscripts will and can be used. In the next chapter, Sondheim et al. provide a finely cut theoretical lens through which to examine two cases where editors produced both print and digital expressions of their scholarly edition. By identifying instances where the editors found advantages in one medium over the other, these authors remind us that as writers and editors move more fully into the digital realm, they are still well advised to keep in mind the homo typographicus introduced by Van der Weel. In the final essay in this collection, Allison Muri advances McKenzie's thinking on the sociology of texts by using digital mapping technology to integrate facsimiles, texts and markup, and database entries into a new form of scholarly edition, her Grub Street Project, in which the map is the primary interface for studying the literary production of eighteenth-century London. Muri demonstrates how the Grub Street Project itself functions as a scholarly edition of literary London, providing ways of reading texts through visualizations of the people, places, and events the project "contains."
No single collection of essays can hope to answer fully the question of what lies beyond for (digital) textual studies, but we offer this collection both as an assembly of post-accessibility thoughts by leading and emerging textual scholars and as a springboard for further thinking about what may come next. In assembling this collection, we have tried to emulate the shifting shape of textual studies itself in the digital age: from the high-level theorizing of Van der Weel's chapter on homo typographicus to the praxis of careful preparation of texts anticipated by Smith and Viglianti and illustrated by deTombe and Kegel; from the historically informed case studies provided by Shep, Irvine, and Liu to the methodological investigations offered by Robinson and by Sondheim et al.; and from discussions of new reading environments found in the work of Nelson, Muri, and Schofield to the theoretical critiques of our current and future state of digital reading materials presented by Bläsi and by Estill and Levy. As this brief summary suggests, alternative arrangements of these chapters, and of the issues and trends in textual studies, are available. Thus, it seems to us that a digital edition of Beyond accessibility: Textual studies in the twenty-first century is the most sensible way to present this work to the world. We hope both the form and content of this book give rise to new imaginings of the shape of textual studies in the twenty-first century.
[1] INKE was funded by Canada's Social Sciences and Humanities Research Council (SSHRC). For a history of INKE and its many facets of research, see inke.ca. The editors would like to acknowledge the generous support of SSHRC and INKE's several partners.
[2] "Text analysis" commonly describes a more or less statistical approach to analysing the words used in one or more texts. This approach to studying text(s) is made possible by the computer's powerful ability to gather and sort data (cf. Archer 2009, 1-15). "Distant reading" describes a more or less statistical approach to literary history; one that focuses on publication data rather than verbal content to reveal how literature has changed over time and, more deeply, any patterns inherent in those changes (see Moretti 2005).
[3] It must be acknowledged, however, that inequities in infrastructure limit accessibility in some parts of the world. For an initiative directed at addressing such barriers to global participation and collaboration in digital scholarship, see GO::DH.
[4] Gary Stringer's DV-Coll and Peter Robinson's Collate were developed in the 1980s; more recent developments include The Versioning Machine, Juxta, and CollateX (an ongoing development of Collate). Transcription tools include T-PEN at Saint Louis University, the Textual Communities Workspace at the University of Saskatchewan, and eLaborate at the Huygens Institute, to name only a few of the tools under development.
Archer, Dawn. 2009. "Does frequency really matter?" In What's in a word-list? Investigating word frequency and keyword extraction, edited by Dawn Archer, 1-15. Farnham, UK: Ashgate.
Bryant, John. 2002. The fluid text: a theory of revision and editing for book and screen. Ann Arbor: University of Michigan Press.
Chartier, Roger. 1995. Forms and meanings: Texts, performances, and audiences from codex to computer. Philadelphia: University of Pennsylvania Press.
Cummings, James. 2012. "The materiality of markup and the Text Encoding Initiative." In Digitising Medieval and Early Modern material culture, edited by Brent Nelson and Melissa Terras, 49-81. Toronto and Tempe: ITER and the Arizona Center for Medieval and Renaissance Studies.
Early English Books Online (EEBO). 2016. [Home]. Accessed August 10, 2016. http://eebo.chadwyck.com/home.
Elliott, Ame and Marti A. Hearst. 2002. "A comparison of the affordances of a digital desk and tablet for architectural image tasks." International Journal of Human-Computer Studies 56.2: 173-197.
Gabler, Hans Walter. 2010. "Theorizing the digital scholarly edition." Literature Compass 7.2: 43-56.
Galey, Alan, Richard Cunningham, Brent Nelson, Paul Werstine, Ray Siemens, and the INKE Research Group. 2012. "Beyond remediation: The role of textual studies in implementing new knowledge environments." In Digitising Medieval and Early Modern material culture, edited by Brent Nelson and Melissa Terras, 21-48. Toronto and Tempe: ITER and the Arizona Center for Medieval and Renaissance Studies.
Greetham, David. 1994. Textual scholarship: An introduction. New York: Garland.
Hockey, Susan. 2000. Electronic texts in the humanities: Principles and practice. Oxford: Oxford University Press.
---. 2004. "The history of humanities computing." In The Blackwell companion to digital humanities, edited by Susan Schreibman, Ray Siemens, and John Unsworth. Oxford: Blackwell. http://www.digitalhumanities.org/companion/.
Hoover, David L., and Sharon Lattig. 2007. Stylistics: Prospect & retrospect. Amsterdam: Rodopi.
Hoover, David L., Jonathan Culpeper, and Kieran O'Halloran. 2014. Digital literary studies: Corpus approaches to poetry, prose, and drama. New York: Routledge.
Howsam, Leslie. 2006. Old books and new histories: An orientation to studies in book and print culture. Toronto: University of Toronto Press.
Kirschenbaum, Matthew G. 2008. Mechanisms: New media and the forensic imagination. Cambridge, MA: MIT Press.
Krüger, Wolfgang, Christian-A. Bohn, Bernd Fröhlich, Heinrich Schüth, Wolfgang Strauss, and Gerold Wesche. 1995. "The responsive workbench: A virtual work environment." Computer 28.7: 42-48.
Mak, Bonnie. 2011. How the page matters. Toronto: University of Toronto Press.
Marotti, Arthur F. 1986. John Donne, coterie poet. Madison, WI: University of Wisconsin Press.
McGann, Jerome. 2001. Radiant textuality: Literature after the World Wide Web. New York: Palgrave.
---. 2006. "From text to work: Digital tools and the emergence of the social text." Romanticism on the Net 41–42.
McGann, Jerome, and Dino Buzzetti. 2006. "Critical editing in a digital horizon." In Electronic textual editing, edited by Lou Burnard, Katherine O'Brien O'Keeffe, and John Unsworth, 53-73. New York: MLA.
McKenzie, D. F. 1999. Bibliography and the sociology of texts. Cambridge: Cambridge University Press.
McLeod, Randall. 1981/1982. "Un 'editing' Shak-speare." SubStance 10/11.33-34: 26-55.
Meunier, Jean-Guy. 2009. "CARAT–Computer-Assisted Reading and Analysis of Texts: The appropriation of a technology." Digital Studies / Le champ numérique 1.3. Accessed October 15. http://www.digitalstudies.org/ojs/index.php/digital_studies/article/view/161.
Moretti, Franco. 2005. Graphs, maps, trees: Abstract models for literary history. London and New York: Verso.
Nelson, Brent, and Melissa Terras, eds. 2012. Digitising Medieval and Early Modern material culture. Toronto and Tempe: ITER and the Arizona Center for Medieval and Renaissance Studies.
Newman, William, and Pierre Wellner. 1992. "A desk supporting computer-based interaction with paper documents." In Proceedings of the SIGCHI conference on human factors in computing systems, 587-592. Monterey, CA: ACM.
Nunberg, Geoffrey, ed. 1996. The future of the book. Los Angeles: University of California Press.
Shillingsburg, Peter. 1986. Scholarly editing in the computer age: Theory and practice. Athens, GA: University of Georgia Press.
Siemens, Ray, Claire Warwick, Richard Cunningham, Teresa Dobson, Alan Galey, Stan Ruecker, Susan Schreibman, and the INKE Team. 2009. "Codex Ultor: Toward a conceptual and theoretical foundation for new research on books and knowledge environments." Digital Studies / Le Champ Numérique 1.2. Accessed June 16, 2016. http://www.digitalstudies.org/ojs/index.php/digital_studies/article/view/177.
Sinclair, Stéfan, and Geoffrey Rockwell. 2017. "Voyant: See through the text." Voyant tools. Accessed February 11, 2017. https://voyant-tools.org/.
TEI. 2016. "TEI: Text Encoding Initiative." Accessed August 21, 2016. http://www.tei-c.org/index.xml.
Text Creation Partnership (TCP). 2016. "EEBO-TCP: Early English Books Online." Accessed August 10, 2016. http://www.textcreationpartnership.org/tcp-eebo/.
Van Mierlo, Wim. 2007. "Introduction." In Textual scholarship and the material book, Variants 6, 1-12. Amsterdam: Rodopi.