Introduction

Research in the humanities is digital.

The extent to which that is true may vary, but as a general rule, humanists read texts on computers, tablets, and e-readers; find articles in digital libraries; take voice notes on their phones; consult Wikipedia; write on computer keyboards with word processing software; videoconference with collaborators; manage secondary sources with Zotero; export documents as PDFs; import calendar events; apply through Google Forms or ScienceConf; browse online catalogues; and send emails to publishers. While remnants of analog media survive in the humanities, they do not threaten the dominance of digital practices (Burdick et al. 2016). The impacts of digital tools on research practices are varied: technological choices enable and constrain us and, most importantly, shape our work and thought processes (Fiormonte, Numerico, and Tomasi 2015). Computers do not merely replicate analog artifacts such as typewriters, library cards, and books; they represent a completely different kind of object (Hayles 2007).

We define the idea of a general theory of humanities scholars’ interaction with digital tools as both a method and a shared set of principles and rules to ascertain the effects of any given technological assemblage on a given community of practice. Tools are not politically neutral; they are designed and operate within ideological, material, and epistemic contexts (Foucault 1975). Moreover, they form and inform our perceptual and epistemic horizons, direct our research, and organize our networks of knowledge production and dissemination (Baird 2004; Rogers, Singhal, and Quinlan 2014). Scholars need theoretical frameworks to ascertain that digital tools influence their work and to understand how they do so through their affordances and orientations.

The task at hand is twofold: first, to establish the foundational principles to study digital tools and their effects on humanities research; second, to recognize different forms of expertise and experience with these tools. We then propose that a general theory should be a collaborative endeavour, one that requires us to accept the validity and importance of many different approaches to digital tools in the humanities. Finally, we elaborate a protocol to bring together scholars with radically different relations to digital tools and steer them towards a greater understanding of each other’s digital practices and cultures.

What is a tool?

In Gesture and Speech, French archaeologist and anthropologist André Leroi-Gourhan marks the centrality of tools in human history:

The whole of our evolution has been oriented toward placing outside ourselves what in the rest of the animal world is achieved inside by species adaptation [such as] our unique ability to transfer our memory to a social organism outside ourselves. (Leroi-Gourhan 1993, 235)

Tools can extend our senses and abilities, but their interactions are more subtle and complex: through networks of affordances, orientations, and practices, they become actants (Latour 2005) with political and cultural effects on knowledge production and discovery (Bijker, Hughes, and Pinch 1993). The consequences of certain research tools are so fundamental that they have led, many times in human history, to complete shifts of epistemic paradigms (Kuhn 1997; Hayles 2012). The credibility of science is tied to the precision and design of its tools (Baird 2004), while humanistic research is enabled and limited by material networks of knowledge and the tools used to support thought and memory, such as libraries, writing implements, and, more recently, computers (Fiormonte, Numerico, and Tomasi 2015).

To theorize a tool is to situate it within a wider ecosystem of tools and humans, and to understand how it integrates and interacts with these structures. Through this process, we are led to appreciate tools as material, cultural, and political artifacts (Winner 1980). In other words, they are mediators rather than intermediaries, and “Mediators transform, translate, distort, and modify the meaning or the elements they are supposed to carry” (Latour 2005, 39). Importantly, tools have affordances and orientations. According to Evans and colleagues, affordances are the constructed and learned intersection between an artifact’s properties (its features) and what is done with the artifact (the outcome) (Evans et al. 2017). They are political in nature insofar as feature and design choices inscribe themselves within a given ideological and cultural context (Norman 1988).

Technologies are designed, implemented, and used through webs of choices. […] Each choice—explicit or implicit—reflects and affects value orientations, sociostructural arrangements, and social dynamics.

Because values are not neutral and tend to reinforce power and status structures, technologies are often infused with the politics of the powerful. [The] mechanisms and conditions framework begins with the assumption that if left unchecked, technologies will arc toward privilege and normality. (Davis 2020, 14)

The tendency of tools towards normality goes both ways: tools are designed for normal use and are normalized through use. More broadly, the orientation of a tool is its relation to an ideological paradigm and to power dynamics, whether in its design, use, or context (Foucault 1975).

Are digital tools different?

Is the difference between analog and digital tools one of degree, or one of kind? In theory, any and all tasks done by a computer or a network of servers could be accomplished, given enough time and resources, by a human. However, many of these tasks would take an impractically or impossibly long time and would, in practice, never be accomplished. Computers enable such functions to be implemented and, by doing so, enable processes, open new lines of inquiry, and allow us to work at new scales. Digital tools displace memory and some cognitive processes outside of the human brain; they represent a step in what Katherine Hayles dubs biotechnoevolution: “a hybrid process in which information, interpretations, and meanings circulate through flexible interactive human-computational collectivities” (Hayles 2019, 32). Intermediation—the flow of information from one medium to another, and from humans to digital media—allows for the emergence of complexity and new meanings (Hayles 2007; Hayles 2019). Hayles describes the paradigm-shifting potential of digital tools in these words:

The new wrinkle is the power of computers to perform cognitively sophisticated acts. Compared, say, to a hammer or stone ax, a computer has much more flexibility, interactivity, and cognitive power. In addition, computers are able to handle both natural language and programming code, capabilities that allow them to function in complex human-computer networks. (Hayles 2007, 102)

Research in the humanities is not simply supplemented by computers; it is fundamentally changed by them (Ingvarsson 2021; Hayles 2012). Our contact with digital architectures informs, forms, enables, and limits us; “space is the context of the action: it makes it possible, and it shapes it” (Vitali-Rosati 2016, 96). However, these effects can be hard to detect when digital tools are normalized, sometimes to the point of becoming invisible.

The naturalization of digital tools

Science and media become transparent when scientists and society at large forget many of the norms and standards they are heeding, and then forget that they are heeding norms and standards at all. (Gitelman 2008, 7)

A tool is naturalized or essentialized when it stops being thought of as a tool and comes to be seen as part of the normal order of things or as an extension of its user (Gitelman 2008). In the case of digital tools, naturalization takes two forms that both involve invisibilization:

  1. A tool becomes invisible when its presence goes unnoticed or is taken for granted (Underwood 2014).

  2. A set of design principles is naturalized when it becomes the default and all other possible design choices are thus invisibilized (Vitali-Rosati 2024).

In many cases, the naturalization process of digital tools is the result of design as much as use: the current trend is to produce hardware and software that are specifically ready-at-hand (Heidegger 2002) and follow the functional imperative (Vitali-Rosati 2024). Digital tools tend to blend into a larger digital ecosystem and sometimes even reinforce themselves as monopolies (Smyrnaios 2016). This is most easily seen in everyday life where we, for instance, Google things; Zoom call people; are proficient in Office; Photoshop pictures; Venmo cash; and Uber from one place to the next. The genericization process does not seem to be as pervasive within humanist research practices, although many assumptions can be made about which tools—hardware, software, internet services, databases—are used when co-editing (Google Docs or Microsoft 365), searching for secondary sources (Google Scholar or WorldCat), writing a paper (Word), putting together a presentation (PowerPoint or Google Slides), reading an article (Acrobat Reader), etc. More importantly, for most research activities, one can safely assume the use of a screen, a keyboard, a mouse or a trackpad, and internet access. This naturalization process covers and invisibilizes a set of design principles that underlies most editorial choices in digital tools. This set of principles, which constitutes the affordances of digital tools, is based on the orientation of a few companies that cater to corporations first and consumers second (Smyrnaios 2016).

Orientation and affordances of digital tools

Every system is infused with choices, formed by technical challenges, and directed towards specific uses: digital tools bear the traces of a developer’s, a corporation’s, or a community’s intents, interests, and limits (Vitali-Rosati 2016). For instance, there is a common set of design principles—such as ease of use, streamlining, and uniformity—that characterizes the GAFAM (Google, Amazon, Facebook [now Meta], Apple, and Microsoft), which makes their products quite attractive to new and established users alike, leading a larger user pool to adopt hegemonic methods and technologies (Norman 2002). In Éloge du bug, Marcello Vitali-Rosati focuses on the functional imperative, intuitivity, and the rhetoric of immateriality:

  1. The functional imperative is the design principle according to which technology should “simply work.” It prioritizes efficiency and functionality over other considerations such as ethics or culture. This imperative leads developers and designers to limit freedom, autonomy, and agency, and it normalizes the invisibilization of the protocols and algorithms that support a seemingly “flat” technological infrastructure (Vitali-Rosati 2024).

  2. The rhetoric of immateriality is a common aspect of the discourse around technology that sets the material and immaterial apart and establishes one as valuable and pure (the realm of ideas), and the other as trivial and dull (the material) (Vitali-Rosati 2024). This discourse affects digital technology by encouraging designers and developers to obscure or bury the reality of materiality and labour under a product that seems seamless. Digital tools try to efface themselves by asking of their users natural gestures and as little interaction as possible with the back end of their hardware and software.

  3. Intuitivity is a design principle that pushes for the alignment of function with natural or previously established behaviours (Norman 2002; Vitali-Rosati 2024). Its main goal is to create systems that can be learned and used effortlessly by any user who is familiar with a given network of affordances. Intuitivity limits the scope of what applications and hardware can do to functions that fit in that previously established network.

The homogenization of digital tools outside of academia has an impact on research practices: software and networks developed for the corporate world or the consumer market now shape the social sciences and the humanities, since many research tasks have been digitized. The design principles that direct search functions (Underwood 2014), text editors (Fauchié 2018), and editorial chains (Vitali-Rosati 2016), among others, exemplify how digital tools that were not designed with scholarly work in mind have been adopted by universities and the academic community. In many cases, the naturalization process has been manufactured: educational pricing leads to the introduction of corporate tools as early as first grade, and some technologies like recommendation algorithms are so opaque that they are not forgotten, but rather unknown (and, in certain cases where machine learning is involved, unknowable). The GAFAM strategies are only some of many possible strategies for developing a digital tool, but they reveal these companies’ orientation, and thus the orientation of their tools. However, these values do not constitute the only possible set of design principles: other guiding principles can lead to wholly different approaches to digital tools, with their own epistemic frameworks and implications (Fauchié 2018). But technological resistance, the use of tools that follow alternative digital approaches, is difficult and can require going against established practices or institutional decisions.

Theorizing digital tools

Scholars should have the means to understand how their use of digital tools affects the épistème—we borrow the term from Foucault (Foucault 1975)—in which they produce knowledge, and they should receive the necessary training to evaluate the effects of specific software, hardware, database structures, and networks on their research. The last decades have seen many attempts at theorizing the effect of individual tools on research in the humanities. The naturalization of these tools, their impact on research practices, and their homogenization have led to scholarly work in media studies (Gitelman 2008; Hayles 2012), technology studies (Bijker, Hughes, and Pinch 1993), editorial studies (McGill 2018; Fauchié 2018), epistemology (Vitali-Rosati 2024), and many other fields. A general theory of scholars–digital tools interaction should apply to the following aspects of research with digital tools in the humanities, and their impact on knowledge production:

  • User interface

  • In-text search function

  • Data availability and data gathering

  • Secondary and primary source search

  • Database structures

  • Editorial chains

  • Text processing

  • Reading technologies

  • Communication technologies

  • Communication protocols

  • etc.

Case study: The theorization and naturalization of the search bar

In Theorizing Research Practices We Forgot to Theorize Twenty Years Ago, Ted Underwood discusses “search” tools, a “deceptively modest name for a complex technology” (Underwood 2014, 64). In this article, Underwood describes how the search function has been normalized and naturalized within academia to the point where its results are almost never challenged. However, the principles that underlie a simple search function are numerous and far from neutral; from the editorial choices behind a database structure, to the care with which keywords were chosen by authors and publishers, to the display interface and the recommendation algorithms that filter and order results, most search functions are black boxes with a hidden agenda. More importantly, these same recommendation algorithms are invisibilized and the search bar naturalized: results are unquestioned, the shape of the tool is taken for granted, and other forms of search are unexplored. Underwood’s article exemplifies the need for a general theory of scholars–digital tools interaction: a conceptual framework researchers can use to understand how their relation to their computers, software, and databases affects their work. If a tool as simple, hegemonic, and ubiquitous as the search bar hides such complexities and orientations, every digital tool warrants similar scholarly scrutiny.
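
To make the stakes concrete, consider a deliberately naive ranking function, sketched below in Python. This is a toy illustration under our own assumptions, not Underwood’s example nor the algorithm of any real search engine; the point is simply that the field weights, an editorial choice the user never sees, help determine which results appear first.

    from typing import Optional

    def score(document: dict, query: str, weights: Optional[dict] = None) -> float:
        """Toy relevance score: weighted keyword counts across metadata fields."""
        if weights is None:
            # Arbitrary editorial choices, invisible to the person searching.
            weights = {"title": 3.0, "keywords": 2.0, "abstract": 1.0}
        q = query.lower()
        return sum(w * document.get(field, "").lower().count(q)
                   for field, w in weights.items())

    corpus = [
        {"title": "Theorizing Research Practices", "keywords": "search, tools", "abstract": "..."},
        {"title": "Digital Tools in the Humanities", "keywords": "", "abstract": "A survey of digital tools."},
    ]

    # Changing the weights (or the keywords supplied by authors and publishers)
    # reorders the results without any change to the documents themselves.
    ranked = sorted(corpus, key=lambda doc: score(doc, "tools"), reverse=True)

A real search interface hides many more such parameters, which is precisely what makes it a black box for the scholar who relies on its results.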

Relation between scholars and digital tools: A toy model

In the previous section, we outlined the main principles by which one might theorize a given digital tool in one’s own practice. However, developing a general (or generalizable) framework to ascertain and understand the effects of digital tools on scholarly research in the humanities requires an interdisciplinary effort. Humanists have varied relations to computers and software, based on culture, épistème, training, needs, and goals. The last section presents a possible protocol to bridge the divide between different communities of practice and epistemic horizons in order to collaborate on developing a theory of humanities scholars–digital tools interactions. To facilitate the gathering of humanists with diverse relations to technology, we introduce in this section a “toy model,” a simple heuristic representation to characterize and visualize these different relationships to digital tools. Although quite simple, this representation reveals some of the complexity of the subject, underlining the importance of different voices and forms of expertise in this endeavour.

Variables of a heuristic toy model

Our “toy model” characterizes scholars along seven axes with values between 0 and 1:

  • Usage: Represents how frequently a scholar uses digital tools compared to analog ones. It can be modelled as the percentage (between 0 and 100%, the latter being represented by the value 1) of academic work conducted with digital tools rather than traditional, analog methods.

  • Breadth: Reflects the variety of digital tools a scholar employs in their research. At one end, a scholar might use only a few digital tools like email and a word processor while relying heavily on printed material. At the other end, a digital humanist might write with a diverse array of tools such as HedgeDoc, Google Docs, Obsidian, Stylo, Visual Studio Code, GitHub Editor, and Notepad, switching between them based on specific needs or preferences. This scale is more arbitrary than the first one; the following function takes the number of digital tools used on a weekly basis as its independent variable x and yields a percentage that, mapped onto the 0 to 1 scale as for the usage axis, is close to 0 for 1 or 2 tools, gradually increases to 0.5 for 15 tools, and approaches 1 around 30 different tools (see the code sketch after this list):

    f(x) = 50\tanh\left(\frac{x - 15}{5}\right) + 50

  • Competency: Describes the level at which a scholar uses their digital tools. Most digital tools have functionalities that a casual user might not even be aware of. Competency is the ability to navigate specific tools beyond normative usage, or to master practices that either require advanced technical skills or have a steep learning curve. This scale, also between 0 and 1, as well as the next ones, might require scholars to self-assess, or could be based on an extensive test of their general competencies with, knowledge about, and general attitudes towards digital tools.

  • Knowledge: Measures a scholar’s understanding of the underlying principles, protocols, and algorithms behind the software and hardware that they use. For instance, a user with a very good knowledge of statistical methods could deploy these methods outside of digital media and without specialized software.

  • Criticality: Assesses the scholar’s ability and willingness to engage critically with digital tools and methodologies. This axis involves questioning the biases, limits, and assumptions (in other words, the affordances and orientations) of digital tools, and how they influence the scholar’s work, the scholars themselves, and others.

  • Resistance: Captures the scholar’s skepticism or reluctance to use and adopt digital tools within their work. Resistance covers one’s preference for analog tools, biases against new technologies, and concerns about privacy, technological determinism, agency, etc.

  • Bias: Represents the scholar’s posture towards digital tools and the results they produce. If they are more likely to reject information and knowledge produced through digital technologies, they are said to have an oppositional posture, and if they are more likely to accept it at face value, they have a hegemonic posture (we borrow these terms from Stuart Hall [Hall 2001]). For the sake of the visualization, we set a bias of 0 to be the hegemonic posture, a bias of 1 to be the oppositional posture, and 0.5 to be a negotiated posture where some information is taken at face value and some is rejected, depending on the context.
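
As a purely illustrative aid, the toy model can be encoded in a few lines of code. The Python sketch below assumes our reconstruction of the breadth formula above (its output read as a percentage and rescaled to the 0 to 1 interval); the names ScholarProfile and breadth_score are ours and carry no standing beyond this example.

    from dataclasses import dataclass
    from math import tanh

    def breadth_score(tools_per_week: int) -> float:
        """Map a weekly count of digital tools to the 0-1 breadth axis,
        using 50 * tanh((x - 15) / 5) + 50 rescaled from a percentage."""
        return (50 * tanh((tools_per_week - 15) / 5) + 50) / 100

    @dataclass
    class ScholarProfile:
        """A scholar's position along the seven axes (all values in [0, 1])."""
        usage: float        # share of academic work done with digital tools
        breadth: float      # variety of tools, e.g. breadth_score(n)
        competency: float   # ability to go beyond normative usage
        knowledge: float    # understanding of underlying principles and protocols
        criticality: float  # willingness to question affordances and orientations
        resistance: float   # reluctance to adopt digital tools
        bias: float         # 0 = hegemonic, 0.5 = negotiated, 1 = oppositional

    # The breadth function behaves as described above:
    print(round(breadth_score(2), 2))   # 0.01, close to 0 for a couple of tools
    print(round(breadth_score(15), 2))  # 0.5
    print(round(breadth_score(30), 2))  # 1.0 (approximately)

A profile expressed in this way can then be rendered as a radar chart, as in the figures presented below.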

This framework enables us to distinguish axes that might otherwise be conflated, especially in the humanities where, for instance, the distinction between knowledge and competency is often blurred. In more technical fields, to know and to do are well separated: one can use a tool without knowing how it works, and one can know how a tool is supposed to work yet be unable to operate it. Furthermore, to be biased against a tool is different from being critical of a tool, which in turn does not always correlate with rejecting a tool. Also, a hyper-user could use only a very small selection of digital tools, leading to very little breadth, but have high competency, and vice versa.

This model does not capture more subtle aspects of a scholar’s relation to digital tools such as personal experience, cultural relation to technology, and scholarly goals, among others. It is a heuristic tool to enable the formation of a varied, multidisciplinary team that covers many different relations to technology in humanistic research. To overcome the limits of one’s epistemic horizon, one must first see its edges and be willing to consider practices and épistèmes beyond one’s own. The development of a general theory of digital tools in humanistic research has to take into account the varied experiences and expertise of humanists; doing otherwise would be an epistemic mistake.

Four archetypes

To exemplify the model, we deploy four archetypes—the vigilant, the tinkerer, the classicist, and the adopter—and showcase their profiles using radar charts with the seven variables of our toy model. The vigilant and the tinkerer can be found in Figure 1, while the classicist and the adopter can be found in Figure 2.

Figure 1

Radar charts for the vigilant and the tinkerer according to the seven axes (scale of 0 to 1).

Figure 2

Radar charts for the classicist and the adopter according to the seven axes (scale of 0 to 1).

  • The vigilant tries to use only technologies they fully understand. This greatly limits the breadth of tools they are willing to use, but they have mastery over a few select pieces of software. They are motivated by data security, but also by attachment to the computer they bought 15 years ago and have fixed themselves too many times to count. The vigilant openly criticizes their institution’s choice of handing over its digital infrastructure to Silicon Valley, and they do not carry a smartphone.

  • The tinkerer is a technological anarchist with little regard for proprietary software or copyright laws. They launch open-source applications from the command line on their Linux laptop, and when they cannot find the right software for the task at hand, they develop it themselves. The tinkerer is likely to have a pile of old, broken computers at home to scavenge from according to their needs, and only an angry editor can get them to sign up for Office 365.

  • The classicist’s expertise does not lie in digital tools, which they use sparingly and would gladly replace with a typewriter. The classicist knows the most useful functionalities of Word (the 1997 version) and Outlook and has no intention of ever learning anything beyond that. Their favourite piece of hardware is the printer, and they are strongly biased against e-readers.

  • The adopter wants to use the best tool on the market, even if it sometimes means paying for a subscription service. They are willing not to know what is inside the black box, and they put the emphasis on results rather than process, which does lead to impressive feats of analysis on large datasets. They are not afraid of software with steep learning curves but would prefer not to code anything from scratch themselves.

The vigilant, tinkerer, classicist, and adopter are extreme cases in terms of their relation to digital tools—most humanists fall somewhere between these caricatures—but they try to capture different forms of expertise and, more importantly, different experiences of digital tools.
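
For readers who wish to reproduce this kind of visualization, radar charts like those in Figures 1 and 2 can be drawn with standard plotting libraries. The Python sketch below uses matplotlib; the numeric values assigned to the two archetypes are illustrative guesses of ours, not the values plotted in the figures.

    import numpy as np
    import matplotlib.pyplot as plt

    AXES = ["Usage", "Breadth", "Competency", "Knowledge",
            "Criticality", "Resistance", "Bias"]

    # Illustrative, made-up profiles; they do not reproduce Figures 1 and 2.
    profiles = {
        "Vigilant": [0.5, 0.2, 0.9, 0.9, 0.9, 0.8, 0.8],
        "Tinkerer": [0.9, 0.9, 0.9, 0.8, 0.7, 0.3, 0.4],
    }

    # One angle per axis; repeat the first angle to close each polygon.
    angles = np.linspace(0, 2 * np.pi, len(AXES), endpoint=False).tolist()
    angles += angles[:1]

    fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
    for name, values in profiles.items():
        closed = values + values[:1]
        ax.plot(angles, closed, label=name)
        ax.fill(angles, closed, alpha=0.1)
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(AXES)
    ax.set_ylim(0, 1)
    ax.legend(loc="upper right")
    plt.show()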

Limits and affordances of the model

This heuristic model focuses on measurable—or at least, comparable—characteristics to showcase the plurality of postures a scholar might hold towards digital tools. The main point is that these differences extend beyond a scholar’s horizon and beyond which processes they entrust to technology. Indeed, this model ignores cultural patterns, which “are primarily found in three areas: within a people’s values, their patterns, and their institutions [and] establish the parameters within which a people think and speak” (Simonton 2015, 31). The model also assumes access to digital tools, which cannot be taken for granted, as well as freedom of choice in which tools scholars use. (Institutional rules, publishing practices, material situations, and other more or less authoritative structures can direct or limit these choices.) Hence, the model is only a steppingstone in understanding one’s own technological, cultural, and cognitive horizon, in acknowledging other practices and épistèmes, and in gathering a diverse team in the context of a multidisciplinary discussion—a plurilogue—which we describe in the next section.

Protocol for a collaborative theory of digital tools in humanistic research

The necessity for collaboration stems in part from the concern that most of our relations to technology are based on ideology (Davis 2020; Vitali-Rosati 2024; Bilić, Prug, and Žitko 2021), warped by our technological horizon (Hayles 2012), and naturalized through practice (Gitelman 2008; Underwood 2014): no academic silo is sufficient to generalize a theory of scholars–digital tools interactions. The toy model proposed in the previous section makes it easier to characterize participants, and thus facilitates the formation of a team of scholars who complement each other. The present section gives a framework and guidelines for a multidisciplinary collaboration between scholars with various relations to digital tools. It suggests a structure to bring together a team of collaborators with radically different digital practices and to put into sharp relief ideological biases, technological blind spots, and opposing perspectives through a process that includes shadowing other researchers, adopting unfamiliar methods, and defamiliarization during a collaborative project. Multidisciplinary collaboration is a challenge on multiple levels: on the surface, scholars have different methods, goals, and references, but a more profound problem has to do with subtle yet powerful cultural, philosophical, and linguistic differences (“all cultures have typical ways of expressing themselves, and these are so ingrained that there is little conscious awareness of them” [Simonton 2015, 31]): their horizons and experiences are at odds. Collaborating scholars are at risk of not understanding each other, or worse, of falsely believing that they understand each other.

Why a collaborative theory?

In Can We Be Wrong? The Problem of Textual Evidence in a Time of Data, Andrew Piper describes how a multidisciplinary team collaborated on his study of generalization in scholarly publishing. Piper’s team was composed of three professors and four students from seven different fields ranging from literary studies to biomedical ethics.

The first step involves assembling a team of researchers from different disciplines and different academic levels. Doing so allows us to transcend individual as well as disciplinary biases and blind spots. If we want to have a generalizable understanding of the practice of generalization, then it follows that more diverse points of view that participate in the modelling process will help make resulting models more generally applicable. (Piper 2020, 17)

Piper describes the ensuing discussions as “tumultuous” and “vibrant”; the resulting book is part of the Cambridge Elements in Digital Literary Studies series and avoids many pitfalls a mono-disciplinary team or single author might have fallen into. Can We Be Wrong? is a striking example of multidisciplinary collaboration on a subject that both lacks a general theory and matters to many academic silos. The present protocol systematizes Piper’s approach and avoids the usual pitfalls of acculturation: the process by which one learns a culture different from one’s own.

Structure for a plurilogue

1. Survey the fields

Create an online survey that lets scholars generate their own radar figure, situating themselves along the seven axes. The survey would cover the digital tools that are used in various fields, from text editors to programming languages, from e-readers to the ability to physically repair computers. The survey would also offer scholars the opportunity to elaborate on their relation to digital tools and their experience using them. If they wish to be contacted for the rest of the plurilogue, they should be able to leave their contact information.

2. The gathering

Working from the list of willing respondents from the first step, the organizers use the seven axes as well as the written answers to create teams (multiple plurilogues can occur at once) with radically different approaches to digital tools and contact future participants. Depending on which respondents answer when contacted, the organizers form teams of at least four scholars to begin the next step.

3. Establish a common vocabulary and goal

The organizers offer participants a list of terms and concepts taken from media studies, technology studies, and epistemology to define a common vocabulary. They present the aims of the plurilogue, as well as its epistemological underpinnings, so that the teams have a common framework in which to collaborate. Once more, the real danger is not the possibility that participants might not be able to understand each other, but rather the possibility that they might think that they do.

4. Define and make explicit individual relations to digital tools

The participants are asked to reflect, write, and present on the subject of their own relation to and use of digital tools. This step fosters self-examination and introspection, and prepares the members of the team to understand the extent, limits, and importance of their relation to technology by comparing their own experience with others’.

5. Solve misunderstandings

The participants and organizers work side by side to dive into the presentations of each member of the teams. They ask questions, discuss, and deconstruct each other’s answers until the risk of misunderstanding is minimized.

6. Shadowing

Participants take turns showcasing how they use technology while other members of the team critically observe their methods. This step is also an occasion for participants to ask questions and reflect collaboratively on the various tools and methods being showcased by other scholars.

7. Familiarization (and defamiliarization)

Each participant is asked to design and undertake a short project using the methods and tools presented during the previous step. This short project asks each member of each team to familiarize themselves with a different technological subculture: its tools, its questions, its limits and affordances, while defamiliarizing themselves from their own technological and epistemic horizons.

8. Theorization

The team regroups to share their experiences of the previous step, and then tries to establish a method and a shared set of principles and rules to ascertain and understand the effects of any given technological assemblage on a given community of practice. This attempt should incorporate the discoveries and experiences of each participant and be synthesized for other teams and the organizers as a new steppingstone in the development of a general theory of scholars–digital tools interactions in the humanities.

9. Reevaluation of the plurilogue

The final step is to reevaluate the structure of the plurilogue and suggest modifications or improvements for further collaborative work. The question of adapting the plurilogue to other questions of interest to the participants should also be addressed during this step.

The plurilogue allows room for scholars to recognize their own relation to digital tools, and how the affordances and orientations of these tools shape their work. It also requires the participants to acknowledge different types of experience and expertise, as well as their value, both in theory and in practice. This protocol aims at detecting and correcting blind spots and biases to overcome epistemic barriers in collaboratively developing a general theory of digital tools in humanistic research.

Conclusion

Scholars–digital tools interaction would be a good case study to test the suggested method, in part because such a theory would matter, but mostly because the divisions between communities of practice are visible. The current period, in which scholars who were trained on analog tools and methods work side by side with a younger generation for whom digital tools are often taken for granted, is of particular importance. In the more specific context of computational literary studies, Jeffrey M. Binder wrote: “those of us who live on the cusp of its emergence may be much better poised to see than future generations” (Binder 2016). This lesson carries over to the more general inquiry into digital tools in the humanities; theorizing their use will become more hazardous as the memory and experience of an analog world fades from existence.

Although it is fitting that the impulse for a theorization of scholars–digital tools interactions in humanistic research would come from the digital humanities community, it is not something we should undertake alone: we need to recognize the variety of communities of practice both within and outside digital humanities. Furthermore, we have to consider the digital knowledge and expertise of scholars who do not consider themselves digital humanists: their experience will almost certainly help to cover our own blind spots and reveal our biases. In this endeavour, our role is to emphasize how various communities of practice are shaped, influenced, limited, and enabled by the tools they use, and to kickstart the collaboration to theorize these effects. The alternative is to ignore the many gaps in our understanding of how digital tools affect our work and that of others—which the humanities and social sciences have mainly been doing so far—and to cut ourselves off from the valuable perspectives of our peers.

Competing interests

The author has no competing interests to declare.

Contributions

Editorial

Special Collection Editors

  • Jason Boyd, Toronto Metropolitan University, Canada

  • Bárbara Romero-Ferrón, Western University, Canada

Copy and Production Editor

  • Christa Avram, The Journal Incubator, University of Lethbridge, Canada

Layout Editor

  • A K M Iftekhar Khalid, The Journal Incubator, University of Lethbridge, Canada

References

Baird, Davis. 2004. Thing Knowledge: A Philosophy of Scientific Instruments. Berkeley: University of California Press.

Bijker, Wiebe E., Thomas Parke Hughes, and Trevor Pinch, eds. 1993. The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. Cambridge, MA: MIT Press.

Bilić, Paško, Toni Prug, and Mislav Žitko. 2021. The Political Economy of Digital Monopolies. Bristol: Bristol University Press.

Binder, Jeffrey M. 2016. “Alien Reading: Text Mining, Language Standardization, and the Humanities.” In Debates in the Digital Humanities 2016, edited by Matthew K. Gold and Lauren F. Klein, 201–217. Minneapolis: University of Minnesota Press. Accessed November 18, 2024. https://dhdebates.gc.cuny.edu/read/untitled/section/4b276a04-c110-4cba-b93d-4ded8fcfafc9.

Burdick, Anne, Johanna Drucker, Peter Lunenfeld, Todd Presner, and Jeffrey Schnapp. 2016. Digital_Humanities. Cambridge, MA: MIT Press.

Davis, Jenny L. 2020. How Artifacts Afford: The Power and Politics of Everyday Things. Cambridge, MA: MIT Press.

Evans, Sandra K., Katy E. Pearce, Jessica Vitak, and Jeffrey W. Treem. 2017. “Explicating Affordances: A Conceptual Framework for Understanding Affordances in Communication Research.” Journal of Computer-Mediated Communication 22(1): 35–52. Accessed October 15, 2024.  http://doi.org/10.1111/jcc4.12180.

Fauchié, Antoine. 2018. “Markdown comme condition d’une norme de l’écriture numérique.” Réél-Virtuel 6. Accessed October 15, 2024. http://www.reel-virtuel.com/numeros/numero6/sentinelles/markdown-condition-ecriture-numerique.

Fiormonte, Domenico, Teresa Numerico, and Francesca Tomasi. 2015. The Digital Humanist: A Critical Inquiry. Santa Barbara: Punctum Books.

Foucault, Michel. 1975. Surveiller et punir. Paris: Gallimard.

Gitelman, Lisa. 2008. Always Already New: Media, History, and the Data of Culture. Cambridge, MA: MIT Press.

Hall, Stuart. 2001. “Encoding/Decoding.” In Media and Cultural Studies, edited by Meenakshi Gigi Durham and Douglas M. Kellner, 166–176. Malden, MA: Blackwell Publishers Ltd.

Hayles, N. Katherine. 2007. “Intermediation: The Pursuit of a Vision.” New Literary History 38(1): 99–125. Accessed October 15, 2024.  http://doi.org/10.1353/nlh.2007.0021.

Hayles, N. Katherine. 2012. How We Think: Digital Media and Contemporary Technogenesis. Chicago: University of Chicago Press.

Hayles, N. Katherine. 2019. “Can Computers Create Meaning? A Cyber/Bio/Semiotic Perspective.” Critical Inquiry 46(1): 32–55. Accessed October 15, 2024.  http://doi.org/10.1086/705303.

Heidegger, Martin. 2002. On Time and Being. Chicago: University of Chicago Press.

Ingvarsson, Jonas. 2021. Towards a Digital Epistemology: Aesthetics and Modes of Thought in Early Modernity and the Present Age. London: Palgrave Macmillan.

Kuhn, Thomas S. 1997. The Structure of Scientific Revolutions. Chicago: University of Chicago Press.

Latour, Bruno. 2005. Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press.

Leroi-Gourhan, André. 1993. Gesture and Speech. Cambridge, MA: MIT Press.

McGill, Meredith L. 2018. “Format.” Early American Studies 16(4): 671–677. Accessed October 15, 2024. https://www.jstor.org/stable/90025725.

Norman, Don. 1988. The Design of Everyday Things. New York: Basic Books.

Norman, Don. 2002. “Emotion & Design: Attractive Things Work Better.” Interactions 9(4): 36–42. Accessed October 15, 2024.  http://doi.org/10.1145/543434.543435.

Piper, Andrew. 2020. Can We Be Wrong? The Problem of Textual Evidence in a Time of Data. Cambridge, UK: Cambridge University Press.

Rogers, Everett M., Arvind Singhal, and Margaret M. Quinlan. 2014. “Diffusion of Innovations.” In An Integrated Approach to Communication Theory and Research, edited by Don W. Stacks, Michael B. Salwen, and Kristen C. Eichhorn, 432–448. New York: Routledge.

Simonton, Mark Edward. 2015. “The Liturgical Inculturation of Time: Calendrical Progression in the Anglican Church of Canada.” PhD diss., School of Theology of the University of the South. Accessed October 15, 2024. http://hdl.handle.net/11005/3628.

Smyrnaios, Nikos. 2016. “L’effet GAFAM: stratégies et logiques de l’oligopole de l’internet.” Communication & Langages 188(2): 61–83. Accessed October 15, 2024. https://shs.cairn.info/revue-communication-et-langages1-2016-2-page-61?lang=fr.

Underwood, Ted. 2014. “Theorizing Research Practices We Forgot to Theorize Twenty Years Ago.” Representations 127(1): 64–72. Accessed October 15, 2024.  http://doi.org/10.1525/rep.2014.127.1.64.

Vitali-Rosati, Marcello. 2016. “Digital Architectures: The Web, Editorialization, and Metaontology.” Azimuth: Philosophical Coordinates in Modern and Contemporary Age 7(1): 95–111. Accessed October 15, 2024. http://doi.org/10.1400/245309.

Vitali-Rosati, Marcello. 2024. Éloge du bug: être libre à l’époque du numérique. Accessed October 15, 2024. https://hdl.handle.net/1866/33126.

Winner, Langdon. 1980. “Do Artifacts Have Politics?” Daedalus 109(1): 121–136. Accessed October 15, 2024. https://www.jstor.org/stable/20024652.