Introduction

In keeping with the speed at which change comes upon us in the digital world, the term “born digital” developed a second meaning within four years of first being used to describe artefacts that, to quote Marcia Stepanek, “never...existed on paper”. Stepanek’s meaning is the one with which educators and scholars are probably most familiar. In this use, “born digital” describes a work that is not, in Bolter and Grusin’s term, remediated from print (44-62). That is, it describes a human creation that originates in the digital realm, one created in and for cyberspace, as opposed to one born in print culture and created for life on paper. By 2002, however, Wired magazine had renovated the meaning of born digital into a signification that may be more familiar still to administrators than to teachers in higher education. This time the administrators seemed to be on to something: the sub-title of the Wired article is “Children of the Revolution”, and if we think of born digital as referring to an emerging demographic of readers, we open the door to new ways of thinking, and thence to new ways of teaching. Born digital in this second, newer sense describes not human creativity, but human beings. As of 2009, few of those who constitute the primary readership of this paper would have been born digital. But those who have taught in a first-year university or college classroom within the past year or two will certainly have encountered students born digital; that is, people born into a world in which the computer has been no less a constant feature of their lives than school, television, or radio.[1]

The recent publication of Gary Small and Gigi Vorgan’s iBrain: Surviving the Technological Alteration of the Modern Mind registers the evolution of the born digital metaphor to its logical extension as a metaphor of citizenship. Small distinguishes between “digital natives” and “digital immigrants”, and by using the ink-on-paper book as his primary means of presentation, he makes it clear that his primary audience is digital immigrants, those of us born into print culture but still intellectually and professionally active in digital culture (3). For Small, the digital native is someone who has “never known a world without computers, twenty-four hour TV news, Internet, and cell phones”, while digital immigrants are those of us “who came to the digital/computer age as adults but whose basic brain wiring was laid down during a time” before the features of the digital native's world became commonplace (3). In a similar vein, in Proust and the Squid: The Story and Science of the Reading Brain, Maryanne Wolf reminds us that reading is not a natural characteristic of the human brain but an acquired trait, one that will evolve—and take the brain along with it—as we all, digital immigrants as well as digital natives, become inclined to view the digital age as the natural world in which we exist. In Grown Up Digital: How the Net Generation is Changing Your World, Donald Tapscott articulates a worldview similar to Small and Vorgan’s when he distinguishes between those who have come of age in the digital/computer age and those of us who remember different ways of being and of interacting with a world that was itself different in very important ways.

One of the ways ‘our’ world differed from the current reality was in the delivery of education. Computers open many new avenues for education, but it is important not to overlook the common message in Wolf's, Small's, and Tapscott's books: neither reading nor the codex is natural, and most importantly, the human brain's plasticity needs to be taken into account as educators prepare to deliver information and education to digital natives. This is a neuro-scientifically informed re-articulation of McLuhan's famous dictum that “the medium is the message” (2), one that runs parallel to the arguments made by Willard McCarty that we need to “discover, confirm and exemplify how computing affects analysis” and “explore the realization of scholarly forms in the digital medium, attempting both the historical achievements of print... and the potential of the digital medium to implement new theoretical designs” (220). The point here is not that we can use “the historical achievements of print” in digital media to teach digital natives about print and book history, but that we must, so that they will be effectively educated for analysing media and the messages they communicate.

The purpose of this paper, therefore, is to offer one example of how digital media can be employed for the sake of teaching digital natives about 1) the difficulties of researching a print artefact, and 2) some of the original difficulties of producing a print artefact. In this way students will develop new analytical capacities as they grow to understand that they are not the first and are far from the only generation to boast with some veracity of the technological achievements of their time.

As Maryanne Wolf notes, our “unfolding intellectual evolution ... [is a] story [that] is changing before our eyes and under the tips of our fingers” (4). When educators turn to digital technologies to teach book history and bibliography to digital natives, we do so not for purposes of preservation, but for purposes of revelation, explication, and, ultimately, analysis. Alternatively, if we set about remediating a printed codex into a digital artefact for an audience of those born into any part of Gutenberg’s long-lived print culture, our primary concern will be the preservation of the known and familiar. What we create in that case must not violate the user’s sense of what a book is and what its parts are, but we need not tell the whole story: we can assume the user has a shared history of more or less similar book use. The terms chain line, watermark, leaf, and possibly even gutter will likely have been heard before, even if the student reader cannot point to these physical features in early modern books because, in the first two instances, she has never seen them in the books she commonly reads, and, in the last instance, we are dealing with specialized and rarely used terminology.[2] But it is in the gaps between pieces of evidence, in the act of reasoning itself, that the analytical faculty is pushed into action, and so those not so familiar with the codex form as to have rendered it virtually invisible for most of their lives are actually in a favourable position to learn how to analyze and how to reason – in short, how to think – from the study of a digitized book.

If we set about remediating a book into a digital artefact for digital natives, our primary concern will no longer be preservation; rather, it will be representation of the (ironically) new and unfamiliar.[3] Students of the print era encounter the book as a practically invisible medium. But students of the digital era who encounter the book as new, unfamiliar, and somewhat awkward will confront an error McCarty suggests humanities computing brings to the fore. That error is “the concealed assumption that solving a problem is the end of the matter that generated it” (4). It should always be remembered that just as “we were never born to read” so there is nothing natural about the codex, no matter how the past several centuries of our relationship with that form lead us to feel (Wolf 3). The very history of the technologies for generating print artefacts would constantly remind us of that were it not for the invisibility their success has conferred on them. Teaching book history and bibliography to digital natives requires us to re-present the absent printed artefact. In so doing, by the physical act of “making strange” that with which we are overly familiar, we renew our own perception of the codex form, in a manner analogous to the “ostranenie” or “defamiliarization” advocated by Russian formalists such as Viktor Shklovsky for literary studies (see Jameson 50-4). Thus, through learning bibliography and print history via digital media, the digital immigrant enacts the discovery, confirmation, and exemplification of analysis that humanities computing enables (McCarty 220). Here it is worth recalling Katherine Hayles’s observation that the digital revolution has made the book available for consideration “as a physical object” in ways it formerly had not been (269). It provides a further opportunity to recall the absence, and hence the unfamiliarity to modern readers, of book qualities that were commonplace to readers in the long-lived hand press era: qualities such as paper that is not uniform in shape, thickness, or texture; deckled edges; pages not yet separated; and, again, chain lines, wire lines, and watermarks. Thus, as we prepare to meet the needs of digital natives, we both re-familiarize ourselves with the codex and model for our students the sort of engagement that leads to analysis.

Mindful of the needs of teachers, students, and researchers of bibliography and book history, and of the new possibilities available through digital modelling, Canada Research Chair in Humanities Computing Dr. Ray Siemens invited Drs. Claire Warwick (University College London), Brent Nelson (University of Saskatchewan), Alan Galey (University of Toronto), Paul Dyck (Canadian Mennonite University), and Richard Cunningham (Acadia University) to the University of Victoria’s Electronic Textual Cultures Laboratory (ETCL) to take an early modern book apart for the purposes of creating an electronic resource for research in, and the teaching and modelling of, bibliography and book history. The progress we made in the lab would not have been possible without the substantial assistance of ETCL project manager Karin Armstrong and programmers Mike Elkink and Greg Newton. In the remainder of this paper I will describe the planning process undertaken before we entered the lab, our experience in the lab, and our results.


1.0 Planning

Before gathering in Victoria, we agreed that one book would be disassembled toward the goal of developing something that would help the scholars involved, and the wider scholarly world, to think about modelling print technology through a digital medium. In the end, two books had to be taken apart to achieve this end, a need that resulted directly from the planning undertaken prior to entering the ETCL. Initially, the idea was to digitize an early modern book in a single day in the lab. The opportunity for this project came when Dr. Siemens rescued a selection of early modern books from the discount bins of a couple of London’s antiquarian book stores. These books were inexpensive because they were undesirable to either individual or institutional collectors.[4] Since there was only one day in which to use the facilities of the ETCL to re-create a print object as a (set of) digital artefact(s), we very soon recognized the need to scale back from digitizing an entire book, and decided to limit the attempt to a single gathering.

In order to maximize productive output during lab time, we held a planning session the day before. That session produced the following principles of procedure.

1) Because there would be only one day of lab time, we would emphasise completion over complexity.

2) With the educational value of reconstituting a printed sheet from a bound gathering in mind, we decided that a fairly complex gathering (more than merely a folio or quarto) would be used so that reconstructing it would be a challenge, but not so complex that it would be too difficult for a student with only rudimentary knowledge of early modern book construction to reconstruct herself. Would a duodecimo be too complex, and a quarto too simple? An octavo, could one be found, would be ideal.

3) Because the reconstruction of a complete printer’s sheet was imagined, we decided to prefer a book with untrimmed pages, to provide one more level of information (the deckled edge) to introductory-level students.

4) For the sake of providing extra visual information, but also in the interests of providing as much bibliographic data as possible, we would prefer a book in which a clear watermark and chain lines were visible.

5) Despite the comparative undesirability and unimportance of these books to collectors (and implicitly therefore to scholars), we decided that the final criterion for selecting a specimen would be the looseness or deterioration of the binding. Within such a book, we would choose the gathering that would require the least amount of cutting to remove it.

6) To the best of everyone's ability and availability (given that everyone would be involved in the work itself), we agreed to keep as extensive a record as possible of our procedure and of each decision made in-process.[5]


2.0 In the lab

Although each of the participants knew much about how early modern books came into being, it was not immediately obvious to anyone how to go about removing a gathering such that it would be damaged as little as possible and could be re-assembled at a later date, e.g. by a single curator, another team, or students in a book history or bibliography class. All were deeply concerned about the risks posed to books that had, until then, survived hundreds of years of use and abuse at the hands of either well- or indifferently-intentioned people.

Nonetheless, in the lab, a folio and a quarto were immediately dismissed, even though the quarto was the most interesting book per se.[6] All the books were trimmed, which is to say that no pages in any of the possible choices had deckled edges. Of the remaining books, a sixteenth-century edition of Terence in octavo was chosen because it fit the preferred-format criterion; but though its paper was an interesting mixture of material, it showed no sign of a visible watermark. The book on which we chose to focus the majority of our efforts was Jacques-Bénigne Bossuet's Discours sur L'histoire Universelle (1771) in duodecimo. It has very clear chain lines and what appeared at the time to be two identifiable watermarks in a single gathering, both along the outer edge of the page, where we would easily be able to capture them digitally. In choosing this book, however, we had to sacrifice the principle of a comparatively simple collation.

To start, the Discours sur L'histoire Universelle was laid open at the gathering in which we discerned what we thought were two watermarks but subsequently recognized as a watermark and a countermark.[7] First, we determined the start and the end of the gathering; then, we very gently pulled at the gathering while holding the book securely in order to expose the cord (also sometimes called a thong) to which each gathering is sewn during the binding process. With the cord as exposed as possible, we used a thin sharp blade to cut through it and any residual glue from the binding process. The excision of the gatherings from both Bossuet's Discours and Terence's Comedies was documented during disassembly using one of the ETCL's digital video cameras. The result can be viewed at <http://etcl-dev.uvic.ca/public/book/movies>. The gathering we chose to represent from the Discours had pages stuck together due to water damage, and although this proved to be only a minor irritant for the experiment, it calls attention to the care required in choosing the right book or books to take apart. While one would not want to disassemble a book in pristine condition, choosing one that is too dishevelled or damaged might pose other, possibly insurmountable, problems. Removing the gathering left an obvious hole in the book, making it unnecessary to mark the spot from which pages had been removed. The process to this point took slightly less than one hour.

Although the goal of an electronic version of the gathering had been articulated, it was not obvious even what such a version would be, much less what steps should be taken to create it. Would it be better to scan the printed sheets that formed the gathering, or to photograph them? If a decision were made to scan, should optical character recognition (OCR) software be used, or would the reproductive power of a simple image be better? These questions forced the experimenters into a conversation about the nature of the objectives being pursued. Ultimately, the emphasis on book history outweighed interest in the history of this particular book, much less interest in its text (i.e. its printed content), and so we decided to produce images of the artefactual pages.

Little experimentation was required to prove that the ETCL’s Epson scanner outperformed the Laboratory’s 8-megapixel camera.[8] Scanning at 1200 dots per inch (dpi) in tagged image file (tif) format produced extremely high resolution images, examples of which can be seen at <http://etcl-dev.uvic.ca/public/book/images>; but due to their high resolution, downloading them across the Internet will generally be impractical and will always be difficult for those using a low-speed (i.e. non-broadband) connection. At 1200 dpi, scanning took twelve minutes per exposure (not including image processing). With the duodecimo format of the Discours, this meant that scanning the complete gathering would take nearly two hours (8 scans x 12 minutes = 96 minutes, plus time between scans to place and remove the sheets). It might have been ideal to scan at an even higher resolution, probably somewhere between 2,000 and 4,000 dpi, but the one-day limit on laboratory time meant that this was out of the question. The ETCL had a second scanner available with an optimum output of 600 dpi. Using this scanner could have significantly reduced scanning time, but at the obvious cost of reduced scan quality and lost visual information.
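The tradeoff at issue here, resolution against file size and transfer time, can be made concrete with a back-of-the-envelope calculation. The sketch below (in Python, using hypothetical leaf dimensions rather than measurements taken from the Discours) simply shows that uncompressed image size grows with the square of the scanning resolution, which is why 1200 dpi masters are awkward to move across a slow connection.

```python
# A rough, illustrative calculation (not drawn from the project notes) of why
# very high-resolution scans are impractical to transfer: uncompressed 24-bit
# RGB size grows with the square of the scanning resolution. The leaf
# dimensions below are hypothetical stand-ins for a small-format page.

def uncompressed_size_mb(width_in, height_in, dpi, bytes_per_pixel=3):
    """Approximate uncompressed size in megabytes for an RGB scan."""
    pixels = (width_in * dpi) * (height_in * dpi)
    return pixels * bytes_per_pixel / (1024 ** 2)

for dpi in (300, 600, 1200, 2400):
    size = uncompressed_size_mb(4.0, 6.5, dpi)  # assumed ~4 x 6.5 inch leaf
    print(f"{dpi:>4} dpi: ~{size:,.0f} MB uncompressed")
```

Actual files will be smaller once tif compression or jpeg conversion is applied, but the quadratic growth in data goes some way toward explaining the download difficulty noted above.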


3.0 Results

The results of this one day’s experimental labour are offered as the first deposits to an archive of digital material intended to contribute to studies in bibliography and book history. This deposit is sub-divided into seven component parts: 1) casting_off; 2) data; 3) discourse_reconstruction; 4) images; 5) impression; 6) lightbox; 7) movies. All of these can be accessed, thanks to the University of Victoria, its Humanities Computing and Media Centre, and the ETCL, at <http://etcl-dev.uvic.ca/public/book>. Not all of these seven sub-directories will prove to be of equal value in the classroom, and similarly not all of them will be of equal benefit to those conducting research into book history or into practices of digitization.

In the “casting_off” directory, some extremely basic text analysis is provided under the heading “View the average line length in the top half vs. the bottom half of a page”. This directory represents a tangential line of investigation undertaken in the spirit of pure, unplanned experimentation. The result is an example of rudimentary text analysis that may nonetheless help educators communicate the concept and method of digital text analysis to students. The series of numbers (indicating line length) provided in “casting_off” is valuable in considering the work of a compositor. How hard would the compositor(s) have worked to keep the number of characters and spaces (each representing “1” in our calculations) consistent from line to line, from half-page to half-page, and from page to page? To a great extent such consistency is demanded by the printing press’s forme, but might aesthetic considerations have come into play, too? Not all the data necessary to fully address questions of production demands versus production desires have been gathered or presented here, and perhaps this is not even the appropriate direction in which to set off in search of such data; but within the confines of time, expertise, and temporally-compressed imagination available to the experimenters, this was offered as a start.
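For readers who want to see what this rudimentary analysis involves in practice, the following sketch (in Python, operating on placeholder text rather than the project’s own OCR files) counts characters and spaces per line and compares the average line length of the top half of a page against that of the bottom half, in the spirit of the figures reported in “casting_off”.

```python
# A minimal sketch of the line-length tally described above: each character
# and space counts as 1, and the top half of the page is compared against the
# bottom half. The sample text is a placeholder, not OCR output from the
# Discours.

def line_lengths(page_text):
    """Return the length (characters + spaces) of each non-empty line."""
    return [len(line.rstrip()) for line in page_text.splitlines() if line.strip()]

def average(values):
    """Mean of a list of numbers, or 0.0 for an empty list."""
    return sum(values) / len(values) if values else 0.0

def halves_average(lengths):
    """Average line length of the top half vs. the bottom half of a page."""
    mid = len(lengths) // 2
    return average(lengths[:mid]), average(lengths[mid:])

sample_page = """A stand-in opening line for the top of a page
a second stand-in line follows it here
and two shorter lines
close out the page"""  # placeholder text, not from the edition

lengths = line_lengths(sample_page)
top_avg, bottom_avg = halves_average(lengths)
print("line lengths:", lengths)
print(f"top half average: {top_avg:.1f}, bottom half average: {bottom_avg:.1f}")
```

Run against real OCR pages, a tally like this gives a quick, if crude, impression of how evenly a compositor filled the measure from one part of the page to the next.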

Also to be found in “casting_off” are edited versions of OCR-generated pages from the French text of the Discours. It should be noted at this point that by midday we realized we would only have time to digitize one of the two gatherings we had dis-covered, and that the Discours was felt to have more representative power for our purposes, so we abandoned Terence. Upon clicking on “View the OCR text by page” a reader is liable to suspect browser incompatibility because all that appears is a drop-down menu form with “97” in it, and a Submit button. But the menu allows the reader to choose numbers from 97 to 120, and pressing Submit after choosing one causes a narrow column of left-justified text to be displayed. This column, narrow as it is, presents the full text of each page from 97 to 120. We hoped that the strangely narrow display would help students realize (especially when they compare it to the pdf images to be found in the next sub-directory, “data”) how much thought went into the early presentation of text, and that in printing and book-making our contemporary sense of what is natural is entirely constructed, socially conditioned, and the result of surprisingly high levels of abstract thought in the early years of print culture.[9]

Under “data” the reader will find the raw notes we kept during the day in the lab, the raw OCR text, and a folder containing pdf images of the Discours. An argument could certainly be made that keeping lab-notes is a very different activity from keeping reading-notes, and one that might profitably be taught to Arts and Humanities students in anticipation that one day they might find themselves engaged in laboratory-based research. In this last folder within the “data” sub-directory can be found comparatively low-resolution images of the entire gathering and higher-resolution images of only some pages from the gathering. It should be noted that the images of the entire gathering are low resolution only in comparison to the extraordinarily high resolution supplied elsewhere on the site.

The third sub-directory, “discourse_reconstruction”, provides a Flash animation of the Discours gathering assembled and mounted by Mike Elkink from a page-turning source file provided by the O’Reilly Web DevCenter. The “wow!” factor of seeing an effective digital recreation of the familiar medium of the book seems to impress students in a way that seeing the individual parts laid out does not. In all likelihood this has more to do with the dynamism and interactivity of the page-turning effect in comparison to the static presentation offered by the pdf and html displays than it does with the whole being greater than the sum of its parts; but when considered in this light, it does provide a useful point of departure for classroom discussion of the value of book-length versus broadsheet printed artefacts (then and now).[10]

The “images” sub-directory offers numerous images produced from the work completed during the single day in the ETCL. The images are offered in Joint Photographic Experts Group (jpeg) and Sony Raw File (srf) formats. The latter files are far too large for any but the fastest connections or most patient readers. There are also further sub-directories within the “images” directory, and each of these folders is descriptively named. The images, even in jpeg format, re-size extraordinarily well in Firefox, Opera, IE, and Safari, enabling easy and effective display of the chain lines, watermarks and countermarks, moisture stains, bleed-through (of the ink, overleaf), and other pertinent details. In truth, the technology makes the study of many bibliographic features easier and more certain because it renders minuscule visual qualities of paper, ink, and binding string easier to see and examine.[11]
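As a practical aside, producing web-friendly derivatives from large master scans is easy to script. The sketch below uses Python with the Pillow imaging library; the file names and target width are illustrative assumptions, not those used on the ETCL site. It downsamples a high-resolution master image to a jpeg suitable for quick display, which is essentially the compromise the jpeg copies in “images” represent.

```python
# A minimal sketch, assuming Pillow is installed (pip install Pillow) and that
# a high-resolution master scan named "leaf_master.tif" exists. Both file
# names and the target width are hypothetical.
from PIL import Image

def make_web_copy(master_path, out_path, max_width=1200, quality=85):
    """Downsample a large master image to a jpeg suitable for web display."""
    with Image.open(master_path) as im:
        im = im.convert("RGB")  # jpeg cannot store alpha or 16-bit channels
        if im.width > max_width:
            ratio = max_width / im.width
            im = im.resize((max_width, round(im.height * ratio)), Image.LANCZOS)
        im.save(out_path, "JPEG", quality=quality)

make_web_copy("leaf_master.tif", "leaf_web.jpg")
```

The srf masters would first need developing with a raw-conversion tool before a step like this; that conversion is not shown here.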

Reproductions of the impositions of various formats can be found in the “impression” <http://etcl-dev.uvic.ca/public/book/impression/> sub-directory.[12] These provide a means for teaching students how formes were set for printing and how paper was folded after being imprinted. These diagrams will also assist those who can benefit from a reminder of how to arrange the loose leaves into sheets when they access the sixth (and last) interactive sub-directory, labelled “lightbox.”
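To give a sense of the logic such diagrams encode, the short sketch below (Python; a simplification supplied here for illustration, not a transcription of anything in the “impression” directory) lists which page numbers fall on the outer and on the inner forme for the simple folding formats. It covers folio, quarto, and octavo only, and reports the set of pages per forme rather than their positions within it; the duodecimo used for the Discours involves cutting and inserting part of the sheet, so it does not follow this simple pattern.

```python
# A simplified model of imposition for the straightforward folding formats
# (folio, quarto, octavo): in the common impositions, the outer forme carries
# the pages whose numbers are congruent to 0 or 1 modulo 4, and the inner
# forme carries the rest. Duodecimo, used for the Discours, is deliberately
# omitted because part of its sheet is cut and inserted.

FORMATS = {"folio": 2, "quarto": 4, "octavo": 8}  # leaves per sheet

def formes(leaves_per_sheet):
    """Return (outer, inner) lists of page numbers for one folded sheet."""
    pages = range(1, 2 * leaves_per_sheet + 1)
    outer = [p for p in pages if p % 4 in (0, 1)]
    inner = [p for p in pages if p % 4 in (2, 3)]
    return outer, inner

for name, leaves in FORMATS.items():
    outer, inner = formes(leaves)
    print(f"{name:>7}: outer forme {outer}, inner forme {inner}")
```

Even this toy version makes the pedagogical point: the compositor had to set pages in an order that becomes continuous only after the sheet is printed on both sides, folded, and cut.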

The functionality of the “lightbox” sub-directory is thanks to the diligence of Greg Newton, who adopted the Virtual Lightbox developed by Amit Kumar and Matthew Kirschenbaum, currently supported by MITH.[13] In the “lightbox” the reader can choose to re-arrange the leaves of either the outer or inner forme of the duodecimo imprint that forms the gathering removed and subsequently digitized from the Discours. Upon choosing one of these options, the reader will be prompted to accept the “unverified” Java applet published by Amit Kumar. Doing so brings up a menu across the top of the browser window, with a diagram of the duodecimo imposition just below it, in the upper left-hand position. The other six images on the “lightbox” stage are the six imprints that go into the creation of the Discours sheet. With the diagram as a guide, the user can rearrange these impressions to recompose the original printed sheet. This activity provides a sense of the complexity involved in arranging, printing, folding, and cutting the original artefact.[14] Having immediate access to the imposition diagram is likely to prove invaluable for all but the most expert bibliographers. By demonstrating the difficulty of reconstruction, we teach students about the challenges involved in composing print-ready pages and thus disabuse them of the commonly held belief that the past was a simpler time, with technologies that pale in comparison to the complexity of technologies in our own time. It also, of course, offers the opportunity to appreciate the printer's art through the re-construction of a virtual print artefact at a medial stage between the printed sheet of paper and the folded gathering.

The final sub-directory, “movies”, contains video files of some of the actual work performed in the lab and is really of interest only in the strictest context of archiving in the digital era. If it supplants the inadequate lab notes, perhaps this is testimony to McLuhan's dictum that the medium is the message.


Conclusion

As more texts and more people are born digital there will be an ever-increasing need to teach bibliography and the history of the book to retain the medium that for half a millennium has conveyed, and thereby shaped, almost all of western culture's messages: from the most important to the most trivial. But if bibliography and book history should be taught for reasons of presentation and cultural preservation, it should also be taught for reasons of information architecture. The book is not a naturally occurring form; it is not the natural way for information and knowledge to be developed, contained, and preserved, nor even a natural way. The book evolved through a series of historical and cultural accidents and decisions described by Carla Hesse as the modern literary system (22; cf. Nunberg 16). As we now find ourselves in a time similar to that in which the well-evolved codex form came more or less permanently under the influence of Gutenberg’s printing press, we digital immigrants and digital natives alike need to be fully conversant in the technical as well as cultural and economic forces that made the book such an influential and determining factor in the evolution of modernity. Thus, a digital archive that preserves and presents features of the early printed book in a way that enables those features to be accessed and manipulated (often in ways that would not be possible using a physical book) offers a means by which scholars and students can remind themselves of the past as they design for the future. The researchers involved in this experiment see their work and the results produced as a first step toward the creation of such an archive.

The rapidity with which the term “born digital” shifted from the inanimate object to the human subject in only four short years should give us pause. Sometime between the middle and end of the fifteenth century, “book” probably made a similar transition, ceasing to refer to a manuscript codex and coming to refer, in common usage, to a codex containing printed pages. According to such scholars as Marshall McLuhan and Walter Ong, neurologist Gary Small, and business author Donald Tapscott, whether or not the medium is the message, it is almost certain to shape the brain of the reader and the reader's understanding of the world and her place in it. Thus, as the term “media” comes to occlude “digital” in the way it previously occluded “print”, educators must use digital media to teach digital natives about “the historical achievements of print” (McCarty 220); that is, about print and book history, so they will be effectively educated for analysing media and the messages they communicate. And although that occlusion might not yet be complete, it is not too early to encourage educators across the disciplines to engage in a species of ostranenie or defamiliarization that will remind us of the ever-growing gulf between digital immigrants and digital natives. To fail to do so might be to needlessly surrender opportunities for communicating with digital natives in media they both understand and, more importantly, prefer. Whether or not the medium shapes the message to such an extent that the message changes with the medium, if students disengage because the media of education are too slow or too foreign, then neither educators nor education has a chance. To avoid such a bleak future, education as an institution and as a concept will have to meet students on their own terms, and those terms are increasingly and indisputably digital. Those who work in book history and digital humanities – the two areas of humanities research that have enjoyed growth over the past two decades while so much arts and humanities education has contracted and watched enrolment decline – will be in no way surprised to see that the experiment and archive described in this article represent a large step toward meeting tomorrow's students in their own culture. In addition to offering this first installment toward a fuller archive for the teaching of bibliography and book history, perhaps the experiment that led to its creation also offers a good model for how humanities scholars and teachers can collaborate and network in a way that mimics Web 2.0 culture, and in so doing communicates to students in their own media environment.



Works Cited

Anon. “Born Digital: Children of the Revolution.” Wired 10.09 (2002). Print; rpt. 20 Jan. 2008. Web. <http://www.wired.com/wired/archive/10.09/borndigital.html>.

Bolter, Jay David, and Richard Grusin. Remediation: Understanding New Media. Cambridge, MA: MIT, 2000. Print.

Bossuet, Jacques-Bénigne. Discours sur L'histoire Universelle, à monseigneur Le Dauphin: Pour expliquer la suite de la Religion, & les changemens des Empires. Paris, 1771. Print.

Brown, John Seely. “Growing Up Digital: How the Web Changes Work, Education, and the Ways People Learn.” Change 32.2 (2000): 11-20. Print.

Cicero, Marcus Tullius. Oraisons Choisies: Latines et Françoises. Traduction Nouvelle. Tome Premier. Lyon, 1728. Print.

───. Oraisons Choisies: Latines et Françoises 2. Lyon, n.d. Print.

Dane, Joseph. The Myth of Print Culture: Essays on Evidence, Textuality and Bibliographic Method. Toronto: U of Toronto P, 2003. Print.

Estius, Guilielmus. Dn Guillelmi Estii s. theologiæ doctoris ... In quatuor libros sententiarum commentaria: quibus pariter S. Thomæ summæ theologicæ partes omnes mirifice illustrantur. Paris, 1696. Print.

Gaskell, Philip. A New Introduction to Bibliography. 1972. New Castle, DE: Oak Knoll, 2006. Print.

Gauch, Sarah. “At a Mountain Monastery, Old Texts Gain Digital Life.” New York Times 4 Mar. 2004, New York; rpt. 20 Jan. 2008. Web. <http://www.nytimes.com/2004/03/04/technology/circuits/04monk.html?pagewanted=1&ei=5007&en=336680830314c574&ex=1393736400&partner=USERLAND>.

Grafton, Anthony. “Future Reading: Digitization and its Discontents.” The New Yorker 5 Nov. 2007, New York: 5 pp; rpt. 20 Jan. 2008. Web. <http://www.newyorker.com/reporting/2007/11/05/071105fa_fact_grafton>.

Hayles, N. Katherine. “Translating Media: Why We Should Rethink Textuality.” The Yale Journal of Criticism 16.2 (2003): 263-290. Print.

Hesse, Carla. “Books in Time.” The Future of the Book. Ed. Geoffrey Nunberg. Berkeley: U of California P, 1996. 21-36. Print.

Innis, Harold. The Bias of Communication. Toronto: U of Toronto P, 1951. Print.

Jameson, Fredric. The Prison-House of Language. Princeton: Princeton UP, 1972. Print.

McCarty, Willard. Humanities Computing. New York: Palgrave Macmillan, 2005. Print.

McGann, Jerome J. Radiant Textuality: Literature After the World Wide Web. New York: Palgrave Macmillan, 2001. Print.

───. The Textual Condition. Princeton: Princeton UP, 1991. Print.

McKitterick, David. Print, Manuscript and the Search for Order, 1450-1830. Cambridge: Cambridge UP, 2003. Print.

McLuhan, Marshall. “Electronic Revolution: Revolutionary Effects of New Media.” Understanding Me. Ed. Stephanie McLuhan and David Staines. Toronto: McClelland & Stewart, 2005. 1-11. Print.

Nowlan, Alden. Between Tears and Laughter. Toronto: Clarke, Irwin & Co., 1971. Print.

Nunberg, Geoffrey. “Introduction.” The Future of the Book. Ed. Geoffrey Nunberg. Berkeley: U of California P, 1996. 9-20. Print.

O'Donnell, James J. “The Pragmatics of the New: Trithemius, McLuhan, Cassiodorus.” The Future of the Book. Ed. Geoffrey Nunberg. Berkeley: U of California P, 1996. 37-62. Print.

───. Avatars of the Word: From Papyrus to Cyberspace. Cambridge, MA: Harvard UP, 1998. Print.

Shillingsburg, Peter. From Gutenberg to Google: Electronic Representations of Literary Texts. Cambridge: Cambridge UP, 2006. Print.

Small, Gary, and Gigi Vorgan. iBrain: Surviving the Technological Alteration of the Modern Mind. New York: HarperCollins, 2008. Print.

Stepanek, Marcia. “Data Storage: From Digits to Dust.” Business Week Apr. 1998; rpt. 20 Jan. 2008. Web. <http://www.businessweek.com/archives/1998/b3574124.arc.htm>.

Suckling, John. Fragmenta Avrea. London, 1648. Print.

Tapscott, Donald. Grown Up Digital: How the Net Generation is Changing Your World. New York: McGraw-Hill, 2009. Print.

Terence. a M. Antonio Mvreto Locis Prope Innvmerabilibvs Emendatus & lectionum varietate recèns auctus. Frankfurt am Main, 1592. Print.

Wolf, Maryanne. Proust and the Squid: The Story and Science of the Reading Brain. New York: HarperCollins, 2007. Print.

Endnotes

[1] A further classification, beyond chronological age, can be used to distinguish between those born into the industrialized world and those born into what is described as the developing world (as though industrialism is a higher evolutionary state), but as this can be assumed, it will not be dwelt on in the body of the paper. A similar glossing-over of socio-economic class is justified by the comparatively small numbers of those who will have entered higher education from backgrounds too poor to own a computer and too ignorant to make use of a public library computer terminal.

[2] Our emphasis is on early modern books because books of more recent vintage used machine-made paper that lacks the useful features of hand-made paper, and the high acidity of most machine-made paper means that books made from it will not survive as long as early modern books have already survived, nor as long as they will continue to survive.

[3] I do not mean to assert that the book will disappear any time soon. It seems to me unlikely it will ever disappear. What I mean to suggest is that the intimate experience of the physical codex form is likely to diminish in the lives of even economically advantaged students in whose homes books have long been a staple of childhood experience. Books will remain part of the furniture of their experience, but as e-paper and other alternatives become more mainstream, the paper-based book may become more and more unfamiliar to students, much as the cabinet-encased radio is now little more than an historical curiosity.

[4] Undesirability in the antiquarian book trade is typically due to one, or some combination, of three things: damage, surplus (higher supply than demand), or inherent lack of interest in the book’s form or content.

[5] See sub-directories “data” → “Book Digitization Re...” and “movies” for the written and videographic records in their rawest forms. Below I will discuss these and the other sub-directories made available at http://etcl-dev.uvic.ca/public/book/.

[6] The books from which we were able to choose were the following:

Jacques-Bénigne Bossuet, Discours sur L'histoire Universelle (1771).

Willem Hessels van Est, Theologiæ Doctoris (1696).

John Suckling, Fragmenta Avrea (1648).

Terence, a M. Antonio Mvreto Locis Prope Innvmerabilibvs Emendatus & Lectionum Varietate Recèns Auctus (1592).

Cicero, Oraisons Choisies: Latines et Françoises (1728); and another, undated edition. (Both editions of Cicero are in quarto and are interesting not just because they are Cicero and are translated, but also because there is evidence in both texts of gatherings having been inserted from more than one imprint. For example, there seems to be a gathering printed in 1723 in the 1728 First Book.)

[7] On the countermark, see Gaskell 62.

[8] By comparison, in 2004 the Monastery of St. Catherine was using photographic equipment that enabled it to produce images with a resolution of 75 megapixels. See Gauch.

[9] Joseph Dane's The Myth of Print Culture notwithstanding, there is currently no more instantaneously communicative term for the incunabula, early modern, and modern periods during which Gutenberg's influence was widely, profoundly, and unavoidably felt than the term “print culture”.

[10] See Hesse 27 on the book’s “mode of temporality” as compared to other print artefacts.

[11] “In a 64-exposure image of a 10th-century manuscript of the Gospels, written in gold leaf, the luster on the Greek letters seems as realistic as if one were viewing the actual book, and in a close-up the uneven texture of the gold leaf is crystal clear” (Gauch).

[12] Reproduced under conditions of fair use from Gaskell, 88–101.

[13] The Maryland Institute for Technology in the Humanities <http://www.mith2.umd.edu/>.

[14] See image at <http://etcl-dev.uvic.ca/public/book/> sub-directory “impression,” image “impose-2-1.tif”.