Introduction
The pervasive influence of computing devices, languages, and protocols on our modes of experiencing and representing the tangible world calls for a reexamination of the interrelationships between syntax and semantics, executability and performativity, matter and culture. The aim of this paper is to provide support for the hypothesis that these traditionally opposed notions have equal dignity and, moreover, are ontologically inseparable realities (see Vitali-Rosati 2021). The ontological parity between these terms is a necessary prerequisite to avoid a false alternative: on the one hand, the naturalization of computing devices, which would place them in absolute continuity with previous technologies; on the other, their complete identification with pure mechanical processes, alien to any production of sense, which would render the semantic scope of these devices invisible.
Following an initial theoretical presentation of this hypothesis, the reference to the TCP/IP protocol model will provide a concrete example to elucidate this essential inseparability between the semantic and mechanical components of information technologies. This investigation will present TCP/IP from three angles: its mathematical–analytical models, its mechanical and physical characteristics, and its cultural and political implications, the last of which are too often regarded as the sole determinants of the protocol suite’s semantic and social impact. Our aim is to demonstrate that these three elements are so intricately intertwined that they should be considered ontologically indistinguishable: any distinction between them arises solely for the purposes of analysis and understanding, rather than reflecting their fundamental nature.
We will show that the analytical models underlying the protocols that make up TCP/IP must take into account both the types of messages that can be transmitted through them and the concrete contexts of implementation, such as bandwidth quality or the computing power of clients and servers. Similarly, the physical contexts in which the protocol is implemented not only influence the executability of TCP/IP from a mechanical point of view, but also determine the concrete organization of the administrative, political, and cultural spaces occupied and connected by the internet networks themselves. Consequently, the cultural scope of the protocol is determined not only by its social and political context, but also by the geographical circumstances and physical materialities that constitute the internet and the devices that participate in it.
Before turning to the case study of TCP/IP, we will elucidate the theoretical approaches and methodologies that are pertinent to a humanistic interpretation of devices, protocols, algorithms, and computer languages. Indeed, our main objective is to delineate the conditions of possibility for a hermeneutics of digital objects.
Code, devices, and human beings: A state of the art
Digital technologies structure the world in which we live. They determine reality temporally and spatially, and influence representations (i.e., our ways of understanding the world as a totality composed of stable and clearly defined objects). Take mobile devices, for example. They encourage us to continually construct our online presence and direct our attention to building other presences, systematizing our walking and writing gestures (Cavallari 2017). We walk with our heads down, drumming on the screen to reply to a message, or we move through space following the line indicated by our trusted browser, running our fingers over the surface of the device to better understand where to go. These gestures play a central role in the constitution of our sphere of action and perception, particularly in determining how we experience and understand the city (Cavallari 2017). Similar dynamics can be observed in many other contexts: the production and circulation of information, economic transactions, and the way we define and embody ideas of personhood and memory are now shaped and enabled by computer technologies. Confronting these devices is, therefore, essential to any reflection on the present times.
In a complementary way, trying to understand the meaning of these devices calls into question our way of knowing and living in the world. Milad Doueihi’s notion of “digital culture” theorizes precisely this mutual influence, emphasizing the significance of digital devices in the production and circulation of knowledge. This is a radical transformation that concerns “the very nature of the objects of our knowledge” and “the space that is supposed to host and circulate them” (Doueihi 2011, 11).
To clarify this argument, Doueihi refers to the concept of friendship, showing that digital culture takes up and, at the same time, profoundly reshapes the classical definition of this notion. Doueihi mentions in particular the cases of Facebook and Wikipedia (Doueihi 2011, 81). On the one hand, the Aristotelian definition of friendship presupposes a condition of equality and equivalence between subjects, thus excluding any hierarchical order or power relationship. On the other hand, while Facebook presents a radicalization of the concept, establishing a community that forgets individual specificities and needs, Wikipedia excludes any personal relationship between users, identifying the concept of equality with the anonymity of authors. Whereas in the Facebook model, sharing is pushed to the extreme and concerns all domains, Wikipedia initiates an interaction focused exclusively on knowledge, which, in this case, corresponds to objective information.
Digital culture does not exist separately from culture as a whole, and the models it proposes depend directly on it. Doueihi’s example of the notion of friendship shows us that digital tools are capable of modifying our world because they are essentially part of it.
In other words, the transformative impact of a device is never an invention created ex novo: it is also determined by the cultures, practices, institutions, material availability, and spatial and temporal dynamics within which the device itself is constituted and exerts its influence. Marshall McLuhan’s famous expression, “the medium is the message” (McLuhan 1994), could be integrated into these considerations. The message depends on the way the medium takes into account and formats reality, a process that is never neutral. Reality can be transformed by the medium, even radically, but cannot, in turn, fail to have an impact on the constitution of the medium itself.
The emerging field of critical code studies is concerned precisely with the mutual influence between digital devices and the systems and cultures in which they are embedded (Marino 2020). By identifying computer code as the key to the study of this interdependence, critical code studies seek to observe the cultural significance of code beyond the specific function for which it was created. Code is understood as an artifact, conceived and implemented in a material and social context. Code is, in fact, defined by its history, its uses, and by “interconnections and dependencies with other codes, materials, and cultures” (Marino 2020, 31).
From this observation, Mark Marino can affirm that the study of the technical structure of code reveals much about the material and cultural context in which it is conceived and used: “analyzing code to better understand programs and the networks of other programs and humans they interact with, organize, represent, manipulate, transform, and otherwise engage. Reading code functions as an entry point to reading culture” (Marino 2020, 44). If code is conceived as an epistemological tool for understanding such interactions, its veracity is guaranteed by its hybrid and medial nature.
Indeed, code is defined by Marino as the guarantor of the encounter between human intentions, always rooted in culture, and the material structure of the computer device. Code is thus understood as the expression and concretization of human thought: “[…] most code is an expression of thought in a kind of shorthand, typically not an ends [sic] but a set of symbols that emerge from the process of making something else” (Marino 2020, 234).
According to this approach, code is the medium of human thought in the technical context of the machine (Marino 2020, 154). As a symbolization of thought, it becomes a representation of human discourse. Marino can thus assert the possibility and necessity of developing a hermeneutic reading of computer code, in order to understand the systems and cultures that determine it. It is therefore necessary to better define the object of observation in order to carry out such a hermeneutic analysis.
To the question “What can be interpreted?” Marino answers: “Everything. The code, the documentation, the comments, the structures, the compiled versions—all will be open to interpretation,” arguing that computer code analysis must take all these elements into account (Marino 2020, 44). A few lines further on, Marino argues for the importance of reading code according to the audience for whom it was written. In this context, the audience corresponds to those who will try to understand or modify its structure and to the machine that executes the code. The audience guarantees the existence of code as a written object to be understood and implemented. In addition to the author’s biases and culture, code is thus also determined by the expectations and abilities of its readers. Among its readers, the machine guarantees its executability (i.e., its agentive nature). The transformative nature of code as the embodiment of thought is thus ensured by its conformity to the machine. The reflections proposed by critical code studies show that code is an intrinsically agentive and performative artifact.
The terms with which Mark Marino defines computer code are certainly useful to understand its impact in our culture. However, the complete assimilation of code to the representation of human thought interacting with a machine to achieve something else could entail problematic consequences. To fulfill this role, code would have to stand between two radically different elements: on one side, the author’s thought and culture; on the other, the device on which this thought is meant to act. In turn, to fulfill this intermediary function, the code would have to be an object shaped by both parties, enabling the two poles to communicate and contaminate each other. Only in such a way could it provide the key to the understanding of computing devices and human culture that Marino was looking for.
In this respect, Wendy Chun notes that code is often regarded as the essence of digital devices because the nature of its executability is not sufficiently problematized (Chun 2008). From her point of view, to establish that source code is at the heart of human–machine communication is, in fact, to presuppose executability as an inherent and innate characteristic of code. Such a presupposition would be a methodological ploy to avoid problematizing the way in which code’s effectiveness is guaranteed. According to Chun, the negation of the problem of executability leads to the misleading assimilation of code to the “source” of the computing device.
From this point of view, considering code as an intermediary between human culture and machine operation is to postulate an unjustified cause-and-effect relationship between the two terms. Code would be the abstract object responsible for this relationship. In the words of Chun, the causal function attributed to code would make it a “fetish” or a “demon,” as it would confer upon code a form of innate autonomy, inexplicable but indispensable to the use and understanding of any computer object (Chun 2008).
Ed Finn defines the common understanding of the concept of algorithm in a similar yet complementary way. In fact, he shows that for common sense, an algorithm is the concretization in an application or program of an otherwise incomprehensible and radically different intelligence. He writes: “the algorithm becomes the mechanism of translation: the prism or instrument by which the eternally fungible space of effective computability is focalized and instantiated in a particular program, interface, or user experience” (Finn 2017, 35).
As in Chun’s remarks, from this perspective the notion of algorithm designates an intermediate object, which refers to an ulterior meaning and materializes it in a given form. In this case, the symbolized meaning is identifiable neither in the nature of the device, nor in human thought, but precisely in a computational intelligence that lies beyond the device itself. The algorithm would concretize this form of thought, making it effective in our material world and therefore intelligible to us, as if the algorithm gave meaning to the program. Like the code fetish theorized by Chun, the algorithm becomes the unfathomable intermediary imagined ad hoc to explain a causal relationship, which is, in fact, problematic. In other words, if code establishes a causality between human thought and the material structure of the machine, the notion of algorithm described by Finn establishes a similar relationship between human thought and computer intelligence.
Despite the differences between the concepts discussed, the difficulty of justifying the exchange of information and the mutual influence between opposing terms is evident in both cases. Yet this influence seems undeniable, as evidenced, for example, by the cited reflections of Peppe Cavallari, Milad Doueihi, Marshall McLuhan, or Mark Marino. Digital devices are made up of ideas and programs, human needs and technical constraints, culture and code—the code which is thought, written, and executed. In the computer device, these components exist “at the same level,” as Simondon would say (Simondon 2012, 175). Understood in the terms described above, code and algorithm are supposed to guarantee this synergy between human and machinic components but fail to actually explain it. They therefore invite us to define this synergy more precisely.
Referring to Judith Butler’s notion of performativity (see Butler 1997), Chun proposes to circumvent the problem of support by asserting the priority of execution over code. Code would emerge from execution, which would be characterized by rigorous iterability (Chun 2008). In turn, the performative iterability of execution would be supported and enabled by an institutional, political, and machinic structure. However, in this way, the paradox of communicability between human and machine would risk being simply transposed and unresolved: we would have to define the dynamics that give rise to such a political and machinic structure, for the omission of such a definition would leave room for the same questions that Chun herself had raised with reference to the risks associated with the essentialization of source code. Like the algorithm or the code, such a performative structure could, in fact, embody a form of causality that is not fully justified.
Faced with this impasse, it seems necessary to rethink the notion of computer code and algorithm, which, as Chun points out, remain too essentialized and abstract in the manifesto of critical code studies. Instead of seeking an instantiation at the heart of the communicability between human intentions and machinic operations, Ruha Benjamin looks at how algorithms of facial recognition, crime prediction, machine reading, etc. systematize and thus reinforce ideas and judgments that are too often discriminatory (Benjamin 2019, 123). In this context, computer code is not defined in terms of its technical function, but neither is it assimilated to mere source code, nor is it understood as pure technical executability, independent of any conceptualization. Focusing on their systematizing role, Benjamin analyzes a long series of computing devices in terms of their performative character. The focus here is on the coded information inscribed in material apparatuses (Benjamin focuses specifically on the discriminations and prejudices encoded in multiple algorithms of artificial intelligence). In this sense, the code does not merely reflect the intentions or representations of a single author or a specific set of authors (Benjamin 2019, 12). Rather, a complex of material entries is represented, or more accurately, inscribed in computer systems. Providing a potential concretization of Chun’s proposal, Benjamin’s approach enables us to reconsider the concept of computer code from a systemic standpoint, redirecting our focus from the challenge of harmonizing human representation and machinic logic to the necessity of observing the actions that these inscriptions in digital devices represent and engender.
The interpretation of information inscribed materially in the computing device necessitates the application of a hermeneutics of material inscriptions, along the same lines as what David Wellbery, referring to Kittlerian methodology, called “post-hermeneutics” (Wellbery 1990). Here, the hermeneutic priority of meaning, the need to make sense of something, is replaced by the observation of factual, hence material, dynamics. In line with Ruha Benjamin’s analysis, the object of Kittlerian hermeneutics is not the intention or textual content as such, but rather resides in the material dynamic at the centre of which the object being analyzed is located. Theories pertaining to the new materialism permit the integration of discourse on meaning into this material and factual observation. Here, having lost its priority role, meaning is assimilated and integrated into matter.
Indeed, new materialist approaches focus on the role of matter in the constitution of reality and, more profoundly, deny any ontological opposition between meaning and matter (Gamble, Hanan, and Nail 2019; see also Vitali-Rosati 2020). The performative role of the computing devices described by Chun and, in different terms, by Benjamin could thus be assimilated to such a material reality.
From this perspective, human intelligence, logic and formal thought, algorithms, the structure of technical devices, the executability that determines code, and the other elements we’ve mentioned so far are immanent to matter. As components of the same ontological content, these elements can only be thought of in relation to matter and are constantly determined by it (Barad 2007). It would therefore no longer be necessary to imagine a medium to explain this mutual influence. Let us now endeavour to elucidate the potential of this approach for a (post-)hermeneutic analysis of digital devices.
It seems particularly useful to apply this perspective to the study of computer protocols, as they involve the systematization of shared practices, resulting from a negotiation between different needs, political forces, and economic and material resources. As the results of this explicit negotiation, protocols establish the models and standards that facilitate or optimize certain processes. It is therefore the negotiation itself that constantly defines their meaning (Galloway 2004, 243).
Protocols also determine the production processes and skills development that perpetuate their standardization activity, reinforcing their influence on the communities that implement and help define them. At the same time, the implementation of a protocol implies a selection and systematization of formats and computer languages. Its implementation triggers a series of choices and standardizations that are not part of the protocol model as such but are nonetheless essential to its effective existence. Protocols as theoretical models implemented in concrete realities are thus clear examples of the interdependence between computing and cultural systems.
Taking the TCP/IP protocol as an example, we will examine the articulation of the interrelationship between three notions introduced previously, citing the work of Marino and Finn: culture, formal logic, and material context. We are particularly interested in TCP/IP, since it is the protocol implemented by the internet; it is therefore one of the theoretical and functional systems that guarantees the extensive diffusion of digital technology in today’s world (Ryan 2010). Much of the production and circulation of information, economic transactions, and human interactions are now shaped and enabled by such a standard. We therefore hope that this case study will be a starting point for the hermeneutic analysis of other digital objects and, in general, for further research into the integration of digital technologies in our world.
Matter and thought
The reflections put forward by Mark Marino on the cultural significance of computer code, by Peppe Cavallari on the reconfiguration of space due to the use of digital devices, or by Milad Doueihi on the notion of digital culture, show us that computer technologies are structured by a sort of collaboration between different instances ascribable to the traditional categories of thought and matter. This relationship would be difficult to understand if these categories were in substantial opposition. It is therefore necessary to further demonstrate the possibility of a hermeneutic of digital technologies, which arises from the rejection of any dichotomy between thought and matter.
If we were to admit the existence of an ontological difference between these two terms, we would either have to reject the singular scope of computing devices by reducing them to mere computing power, or attribute this scope to a way of thinking that is incoherent with our way of representing the world.
In either case, we would have no way of understanding the significance of computer objects. Indeed, if digital devices represented a radically different way of thinking from our own, we could only define them negatively, emphasizing the differences with our intelligence. We would otherwise risk adopting metaphorical definitions, as Chun and Finn have already pointed out. If, on the other hand, the device corresponded to a simple material substrate, we would have to think of its impact on the world as our representation, which depends entirely on our interpretive faculties. We would then fall back into a form of radical idealism. Code and algorithms would be no more than “artificial languages” determined by humans alone. In line with Gadamer’s observations in Truth and Method, they would lack “vitality,” because they would have no correspondence with sensible objects (Gadamer 2018; see also Bajer 2022).
Neo-materialism shows us a way to rethink the alternative between representation and matter and, consequently, to admit the possibility of a hermeneutic of the scope of computing devices. The adoption of such an approach cannot, however, do without a partial questioning of other central categories of the Western thought tradition, such as causality and understanding (see Barad 2003).
Consider the notion of cause. We have already shown that digital devices change the way we see the world and that, in a complementary sense, our representations determine computer technologies. Excluding any hierarchy between digital devices and thought, we must assert a causal relationship between these two objects. This relationship is intended to be reciprocal and ongoing. Indeed, since these technologies are constantly evolving, and we are constantly using them in our daily lives, our way of thinking is constantly changing with them. Similarly, the evolution of digital technology is dictated by the evolution of our needs, our skills, and the availability of materials. It is a reciprocal conditioning that is prolonged over time and, more profoundly, a mutual determination. Human representation and the device thus derive from this relationship, without taking part in it as autonomous and pre-constituted entities (Bozalek and Murris 2021, 40). In this context, cause and effect are not properties of distinct elements, but characterize a type of material interdependence. The concept of cause, thus re-thought, fits in with the notion of “intra-action” introduced by Karen Barad, according to which the existence of objects and concepts is ontologically rooted in the relationality that determines them:
The neologism “intra-action” signifies the mutual constitution of entangled agencies. That is, in contrast to the usual “interaction,” which assumes that there are separate individual agencies that precede their interaction, the notion of intra-action recognizes that distinct agencies do not precede, but rather emerge through, their intra-action. It is important to note that the “distinct” agencies are only distinct in a relational, not an absolute, sense, that is, agencies are only distinct in relation to their mutual entanglement; they don’t exist as individual elements. (Barad 2007, 33)
In this context, human thought is constitutively involved in the causal relationship with computing devices. The interpretation of the digital cannot therefore abstract from this relationship between human thought and machines. On the contrary, such an analysis must help determine both human representations and digital devices. Interpretation must be understood as an integral part of both matter and its constituent processes (Pitts-Taylor 2016). The hermeneutics of the digital cannot therefore consist in laying down an interpretive grid on the world, but must be an act. Karen Barad writes: “knowing, thinking, measuring, theorizing, and observing are material practices of intra-acting within and as part of the world” (Barad 2007, 91).
Although it is a form of thought, interpretation cannot be ontologically different from action. It has a productive function. As an action inscribed in the relationship that determines human and machine, interpretation reconfigures this relationship and, at the same time, our way of representing it. It is a “constructive recategorization” of reality and a modification of it (Edelman and Tononi 2001). Theory and practice, understanding and action, representation and reality, do not exist independently of each other. This has at least two consequences. The first is the end of the concept of identity. If the world is tensional activity and continuous reconfiguration, there can be no objects, persons, or concepts with an essence defined once and for all. Here, the refusal of a radical separation between representation and matter implies the rejection of other distinctions traditionally placed at the foundation of understanding, such as those between subject and object, and interpreter and interpreted.
The second consequence is linked to the first and consists in the end of the human prerogative over interpretation. Interpretation can only take place in a collaborative and relational context, in which humans are not the only actors. Again, Barad writes: “In some instances, ‘nonhumans’ (even beings without brains) emerge as partaking in the world’s active engagement in practices of knowing” (Barad 2007, 149).
To illustrate this last point, let us return to the question of the rigor of code. It is not our cognitive abilities that demand such rigor: in our daily lives, we mostly rely on ambiguously defined information and act accordingly. Yet rigor seems to us to be a central property of code, useful for understanding its specificity. The hardware systems that help determine the structures of source code thus also help us to understand them.
Finally, we argue that the interpretation of computer technologies should take into account the relational nature of the object of study. If concepts and objects are particular conformations of matter, these objects must be understood from the material context that enables them to exist, and in which they are ontologically embedded. The analysis of a particular computer language, protocol, or algorithm, considered in its context, is the most appropriate starting point for understanding the needs and value system it embodies.
We will first apply these considerations to the topic of protocol. This will involve outlining some general concepts and tensions, which will then be clarified by analyzing a specific protocol, in our case the TCP/IP suite.
We have already seen that protocol is the product of a series of skills, interests, socio-political circumstances, institutions, power relations, technical instruments, material availability, and evolving monetary investments that have come together at a given historical moment. We can therefore affirm its contingent nature. On the other hand, these arrangements are based on computational models, which, because of their formal nature, are necessarily valid.
Secondly, and still as a historically determined object, computer protocol is, by definition, superseded by future technologies. This is because it systematizes and models actions designed to resolve a particular set of needs, taking into account a defined range of technologies. However, the fact that the material structure of technical protocols organizes actions in view of certain needs shows that the very configuration of computing devices systematizes and formalizes certain needs, declaring them valid in the future (see Bachimont 2010).
Thirdly, protocol is performative in nature. “Protocol is a circuit, not a sentence,” writes Alexander Galloway (Galloway 2004, 53). Its reach into the world is ensured by its systemic interaction with other programs and physical devices. At the same time, we cannot deny the stratified and radically heterogeneous nature of computing objects. We understand, for example, how binary code responds to different logics and needs from those of a PDF file. The former is designed for machine reading and execution, while the PDF format allows human reading.
This last point suggests that the oppositions raised so far, far from showing the contradictory nature of computing devices, derive from the fact that binary code and PDF format are made up of different instances and logics. Let us now consider these aspects in the light of the functioning and history of the TCP/IP protocol suite that defines the internet, in order to propose a subdivision and an understanding of its constituent instances.
TCP/IP: A case study
To provide an initial application of what has been said so far, we pose three questions regarding the structure of TCP/IP that sketch out three perspectives for understanding this protocol suite. It should be noted that the proposed tripartition is a schematization designed to facilitate the present argument, rather than a comprehensive representation of all the dynamics in which TCP/IP participates.
We will first address the nature of the analytical models that underpin the internet, and their interaction with the protocol suite in which they are implemented. Analytical models provide a logical description of the technology in question. In the case of the internet, this description concerns, for example, the transfer of data packets between clients and servers.
We will then investigate how these models relate to the physical components of the TCP/IP suite, and to what extent these material components regulate our field of action and are embedded in physical, cultural, spatial, and cognitive systems. In our case, we identify the physical components of the protocol with the devices, servers, routers, cables, and electromagnetic waves that give rise to the material reality of internet networks.
Finally, we will consider how protocol implementation systematizes our needs and value systems. We will show that the implementation of TCP/IP protocols corresponds to the concretization of formal models in a material structure and to the adaptation of the formal models in view of certain material circumstances and human culture and needs.
This case study will examine these three themes with consideration of Galloway’s analysis of TCP/IP and his characterization of protocols as systems. In examining the role of TCP/IP within contemporary societies, Galloway highlights its distributed character, which enables the coexistence of diverse sub-systems and protocols within it. He asserts that this distributed protocol organization tends to assume global dimensions, as it can standardize information treatment and circulation while remaining highly permissive.
Indeed, since the TCP/IP protocol suite lacks centralized authority, sites of resistance find no centre of power to oppose. Rather, they are integrated in a seamless manner into the protocol suite’s inherently flexible structure. This flexibility also allows for the incorporation of the Domain Name System (DNS). This protocol enables client applications to identify and locate servers by translating domain names into IP addresses, and it is a decentralized and hierarchical system. This last feature contrasts with the prevailing distributed approach within the TCP/IP structure. TCP/IP’s inherent multiplicity, which may seem contradictory, allows it to establish a dynamic balance between permissiveness and standardization. This balance, in turn, ensures the universality of TCP/IP and establishes it as a set of “management styles” (Galloway 2004, 81).
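The translation performed by DNS can be made concrete with a minimal sketch in Python; the domain name is purely illustrative, and the call simply delegates to whatever resolver the local system is configured to use before any connection is opened.

```python
import socket

# Illustrative domain name only; any resolvable name would do.
domain = "example.org"

# getaddrinfo asks the local resolver, which in turn traverses the
# decentralized, hierarchical DNS system and returns the addresses
# that client applications use to locate and contact servers.
for family, _, _, _, sockaddr in socket.getaddrinfo(domain, 443, proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr[0])
```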
In light of the performative and systemic nature of TCP/IP, as demonstrated by Galloway, this case study aims to show how cultural, mathematical, and geographic logics that are not intrinsic to the narrow definition of TCP/IP influence and are influenced by the protocol suite itself. However, in distinction to Galloway’s analysis, our primary objective is not to assess the impact of the protocol suite on our societies. Instead, we want to show how the various protocols, devices, and infrastructures that make up TCP/IP are situated in material contexts, influenced by external needs and constraints.
Analytical level
We will now proceed to examine the analytical component of TCP/IP. In its most general sense, this perspective describes complexity in mathematical terms. While in other disciplines the debate on the ontological importance of these analytical models remains open, computer systems are indeed based on such mathematical descriptions. However, the nature of the internet is a particularly complex one. The internet is not a program, but a suite of protocols. Each protocol has its own functions, objectives, and structures, and can be described by different analytical models.
At the same time, computational modelling is also an interpretative, hermeneutic tool. Translating an already-documented protocol into a formal schema that represents it in an ideal world, without the constraints of its actual implementation, is useful for understanding internet protocols, and therefore for ensuring their best realization. This abstraction effort is used to explore possible implementations, improvements, or corrections to be made to these protocols or their functionalities. Analytics makes a problem comprehensible by dividing its object into manipulable parts. These manipulations and abstractions define in ever-changing ways the nature of the internet.
The range of actions we can perform on online software is also determined by such models. Word processing, the physical and hardware design of our devices, the structuring of programming languages (deterministic automaton), and so-called artificial intelligence (context-free grammar) make extensive use of such formal models, to cite just a few examples from Michael Sipser’s Introduction to the Theory of Computation (Sipser 2012, 3). These models are of performative interest, in the sense that they contribute to describing and shaping the way we write and manipulate the world through computer systems.
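To give a sense of what such a formal model looks like in practice, here is a minimal deterministic finite automaton sketched in Python; the particular language it recognizes (binary strings ending in 0) is our own illustrative choice rather than an example taken from Sipser.

```python
# A two-state deterministic finite automaton that accepts exactly the
# binary strings ending in "0". The transition table is the entire model:
# given a state and an input symbol, the next state is fully determined.
TRANSITIONS = {
    ("q0", "0"): "q1",
    ("q0", "1"): "q0",
    ("q1", "0"): "q1",
    ("q1", "1"): "q0",
}
START, ACCEPTING = "q0", {"q1"}

def accepts(word: str) -> bool:
    state = START
    for symbol in word:
        state = TRANSITIONS[(state, symbol)]
    return state in ACCEPTING

print(accepts("1010"))  # True: the string ends in 0
print(accepts("111"))   # False: the string ends in 1
```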
Nevertheless, these models must take into account actual implementation scenarios, since their impact on the structure of the internet depends on their implementability and executability.
To illustrate, one might consider the testing of TCP protocol modifications or additions prior to their actual implementation. To be technically relevant, such experiments must be based on realistic traffic patterns, as close as possible to concrete cases (Allman and Falk 1999). Even prior to experimentation, potential network congestion, packet loss, or discrepancies between server and client windows must be modelled and addressed from an analytical perspective (Khalifa and Trajkovic 2004). Indeed, understanding and modelling the possible contexts of implementation is imperative for any protocol model with a claim to effectiveness. In computing, physical development does not exist without analytical conceptualization. Similarly, the value and relevance of analytical models are contingent upon their applicability in concrete and functioning systems, which may be implemented in suboptimal contexts.
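One classic instance of such an analytical description, used here purely as an illustration rather than drawn from the sources cited above, is the steady-state throughput approximation associated with Mathis and colleagues, which relates the bandwidth a TCP connection can sustain to its segment size, its round-trip time, and the packet-loss rate it experiences.

```python
from math import sqrt

def tcp_throughput(mss_bytes: float, rtt_seconds: float, loss_rate: float) -> float:
    """Approximate steady-state TCP throughput in bytes per second,
    following the classic relation MSS / (RTT * sqrt(p)) * sqrt(3/2).
    A rough analytical model, offered here only as an illustration."""
    return (mss_bytes / rtt_seconds) * sqrt(3.0 / 2.0) / sqrt(loss_rate)

# Illustrative values: 1,460-byte segments, an 80 ms round trip, 0.1% packet loss.
bytes_per_second = tcp_throughput(1460, 0.080, 0.001)
print(f"{bytes_per_second * 8 / 1e6:.1f} Mbit/s")  # roughly 5.7 Mbit/s
```

Even such a simple relation shows how an analytical model internalizes the material circumstances of transmission: the same protocol yields very different behaviours depending on latency and loss, which is precisely why such models must be confronted with realistic traffic patterns.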
In addition to the physical conditions of implementation, the effectiveness of the analytical models underlying TCP/IP is intrinsically linked to the type of information to be transmitted. As an example, one might consider the role of the TCP/IP transport layer, which is responsible for ensuring the successful exchange of data between clients and servers. The two principal protocols in this conceptual layer are TCP and UDP. As a permissive, connectionless, and stateless protocol, UDP allows for a much faster and more streamlined exchange of data than TCP. The latter takes up some of the available bandwidth to send acknowledgment data, which confirm the successful receipt of sent packets.
In cases where the UDP protocol is adopted (as happens, for example, in video streaming and gaming), the speed of data transmission and the optimized use of bandwidth are preferred over the certainty that all packets sent have reached their destination. Speed and lightness are in this case considered more aligned with the needs of the client-side application, at the expense of the overall reliability of the transmission. The selection of one protocol approach over another qualitatively connotes the type of exchange performed and, therefore, the information transmitted.
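The difference between the two approaches can be made concrete with a minimal Python sketch, assuming a hypothetical server listening at the indicated address and port: the UDP datagram is sent immediately and without guarantees, whereas the TCP socket must first establish a connection before any data is exchanged.

```python
import socket

SERVER = ("192.0.2.10", 9000)  # hypothetical address (TEST-NET-1) and port

# UDP: connectionless and stateless. The datagram leaves at once;
# nothing confirms that it arrived, nor that datagrams arrive in order.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"video-frame-0001", SERVER)
udp.close()

# TCP: a connection is established first (the three-way handshake),
# after which the stack acknowledges and retransmits segments,
# spending bandwidth and time in exchange for reliability.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(SERVER)               # blocks until the handshake completes
tcp.sendall(b"video-frame-0001")
tcp.close()
```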
As the logical–mathematical conceptualization of the protocol is specified, the conditions of its physical implementation and the meaning of the transmitted messages are also necessarily detailed. Analytical models are not isolated systems; rather, they are constructed with a view to our daily actions and, in general, to the physical context in which they will exist. We will now elaborate on this physical context.
Physical level
The physical component of the internet is constituted by spatially extended devices that are inextricably linked to the value we ascribe to them and to our ability to predict their outcomes. This section highlights how the mutual influences between human cultures, devices, and physical space rule out any fixed causal ordering between these terms.
We begin by emphasizing the influence of the physical configuration of protocols on human actions and practices. To illustrate, consider the case of ethernet cables, part of the Link layer of the TCP/IP protocol suite. The arrangement in space of an ethernet cable implies the existence of a device capable of connecting to it. In turn, the human who interacts with the cable and the computer to get an ethernet connection follows a series of rules and constraints already inscribed in the physical structure of the two objects. This physical structure determines the devices’ ability to consolidate “the repetition of a gesture whose effectiveness has been proven in the past” (Bachimont 2010). The future of our actions is inscribed in the functioning of ethernet cables. This inscribed future will manifest itself from time to time in our daily actions, but it will in fact remain in matter. It is not bound to the presence of a specific, individual, corruptible user.
Nevertheless, it can be posited that the opposite relationship also exists, namely that of the influence of human needs, practices, cultures, and values on the physical conformation of protocols. A rather simple example to explain this idea is the spread of USB-C ports. Their popularity is due to their slim shape, in line with our need to carry very light personal computers, and to the fact that their connector is reversible, which prevents the wear and tear on our computers’ ports caused by repeated human error when inserting older USB connectors. Human needs shape the physical structure of USB-C ports, which in turn will constrain and guide future human actions.
Similarly, document formats transmitted over TCP/IP are dictated by application protocols such as HTTP, which specify the format of the requested information through the “Accept:” header. However, these formats are also designed and selected according to the type of information we need or want to exchange online. The constraints imposed by the TCP/IP application layer, which in turn are shaped by human needs, reflect the influence of our practices and cultures on the technical components of the protocol suite.
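As a minimal illustration, assuming a hypothetical URL, a client can announce the formats it is prepared to receive through the Accept header and read the server’s actual choice from the Content-Type header of the response.

```python
from urllib import request

# Hypothetical URL; any HTTP server reachable over TCP/IP would do.
req = request.Request(
    "https://example.org/resource",
    headers={"Accept": "application/json, text/html;q=0.8"},
)

# The Accept header states which formats the client prefers; the
# Content-Type of the response records the format the server chose.
with request.urlopen(req) as response:
    print(response.headers.get("Content-Type"))
```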
The devices described here embody an ontological relationship between the devices themselves, their material constituents, and the human cognitive ability to orient itself in space and/or time in order to perform the gesture of inserting a USB-C connector or an ethernet cable, or of designing and interacting with a web application. Similarly, all internet protocols are designed to meet a range of both machinic and human constraints and needs. Protocol structure concerns our practices and shapes the physical context in which we live, but it is also precisely described, prescribed, and therefore continually updated by these same practices and contexts.
The same reciprocity of influences, lacking a definitive causal priority, is also evident in the case of the physical contexts and socio-cultural dynamics in which protocols are implemented. To clarify this point, we first consider the extent to which the physical context influences the conformation of protocols. Let us again take the example of the link layer within the TCP/IP model, with a particular focus on the wi-fi family of protocols, which are commonly used to access the internet. Since wi-fi connections are provided by electromagnetic waves, the adopted wi-fi protocol determines the frequency of transmission of those waves. The adoption of a specific wi-fi protocol depends on the availability of devices and routers capable of picking up a given frequency and converting internet data packets into electromagnetic waves. Additionally, the possible presence of one or more external devices capable of interfering with the selected frequency must be considered. Accordingly, the wi-fi protocol calls for the fabrication of such routers and receivers and for the examination of the other devices present in the area of interest. However, its implementation depends on the features of the particular deployment context.
We will now turn our attention to the interrelationships between socio-cultural tendencies and the physical context of protocol implementation. The installation of a wi-fi network in a public setting inevitably entails the formulation of security-related decisions pertaining to user access (e.g., authentication methods, encryption standards, and network segmentation). Concurrently, however, these security choices also depend on the level of protection administrators or communities choose to have, and on which type of data are considered to be more private than others within those communities. The implementation of a wi-fi protocol thus also depends on a number of value attributions.
This overview shows that it is not feasible to establish a definitive causal hierarchy between protocol, physical context, and human cultures and actions. The definition of each of these terms is contingent upon the definitions of the others. It is therefore not possible to consider these three elements as ontologically separate entities. To further elucidate this inextricability, let us now direct our attention to the observation of the cultural aspects that constitute the TCP/IP protocol suite.
Cultural level
TCP/IP was created during the Cold War, within the Advanced Research Projects Agency (ARPA) organization, created by the US Department of Defense in 1958 to find an adequate response to the Soviet launch of Sputnik (Abbate 1994, 3). It is generally stressed that the internet protocol suite was implemented because of two fundamental objectives, both related to the military context of its origins.
The first objective would be the need to enable the exchange of highly reliable information. Indeed, reliability is crucial in a situation of geopolitical tension, where the information transmitted is extremely precise and sensitive. It is no coincidence that the protocol that best guarantees the integrity and reliability of transmitted information is TCP (Transmission Control Protocol). This protocol is so essential to network functioning that it gives its name to the entire protocol suite.
The second objective of the TCP/IP suite is to facilitate the delivery of information to its intended destination despite the potential disruption of the internet network. IP (Internet Protocol) is the optimal protocol for identifying and tracking potential data transmission pathways between network components. This is why the entire internet suite inherited its name.
Since the internet was originally implemented by the student communities of the most prestigious American universities, we can imagine that its first deployments and implementations were also motivated by a genuine enthusiasm for the advancement of research. In the rapid succession of new implementations and funding, the interests of students, researchers, and the American government found a point of convergence (O’Regan 2008, 179).
On the one hand, the fact that TCP/IP embodies the values and interests of two or more homogeneous groups of people demonstrates the political reach of the protocol suite, which can normalize public discourse by selecting some social and cultural patterns and excluding others. Conversely, the internet has been demonstrated to accommodate a multitude of disparate communities, notably including hackers and artists, each with their own values and perspectives. These communities appear to resist the trend toward the homogenization of practices that is embodied by TCP/IP, while still inhabiting the protocol suite (Galloway 2004, 150).
Galloway effectively demonstrates that this ostensible internal inconsistency in TCP/IP is, in fact, a reflection of the protocol suite’s aspirations for universal applicability. To substantiate its viability on a global scale, TCP/IP permits the continual renegotiation of its fundamental characteristics. Crucially, the transversality of TCP/IP is enabled by the fact that all entities participating in it are situated on the same epistemological plane. “Protocol is a solution to the problem of hierarchy,” Galloway writes further (Galloway 2004, 242). In this context, the notion of human life, located at the same level as matter, would be reduced to data and subsequently to packets of data. In a protocol society, the individual would thus be irretrievably split and deprived of substantial internal coherence (on this point, Galloway refers to Deleuze’s Postscript on the Societies of Control; see Deleuze 1992).
This redefinition of the notion of life may be indicative of a performativity of the TCP/IP protocol that is as revolutionary as it is harmful, reducing life to something essentially other than its nature. An alternative interpretation is that this redefinition is symptomatic of an evolving relationship between life and machines, which at present is being influenced by the TCP/IP protocol suite.
In support of this second hypothesis, it is notable that while the distributed protocol approach currently pervades our notion of life and places the distributed RNA system at its core (Galloway and Thacker 2013), throughout Western history the definition of life, far from being settled once and for all, has already been shaped in turn by notions of cosmos, atom, electrophysiology, system, organism, etc. (Tirard, Morange, and Lazcano 2010). The protocol approach undoubtedly transformed our conceptualization of the human being and of the relationship between matter and life. Nevertheless, it did not do so by overturning preexisting ontological, eternal, and absolute definitions of these two concepts, for such definitions cannot be presupposed. Such an eternal differentiation would, in fact, give rise to the issue of the status of protocols, which would once more be identified as mediators between two objects that are incommensurably different. It would be impossible to justify the hybrid nature of the protocol in the absence of an underlying compatibility between life and matter.
Conclusion
Understood as text (Marino 2020), execution (Chun 2008), or inscription (Benjamin 2019), code transcends its role as a set of instructions for machines, revealing itself as a cultural artifact deeply embedded with human values and cultures. And yet, rather than an object isolated from the rest of the technical system or a hybrid object capable of mediating between human representations and the norms imposed by machines, code exists as materialized in computer languages, markup languages, formats, algorithms, programs, protocols, etc.
Similarly, as computer devices and negotiating activities, computer protocols always exist in some form of material context. In this regard, our case study on the TCP/IP protocol suite has demonstrated that it is not possible to consider TCP/IP and its constituent protocols as technologies that are isolated from the respective cultural, physical, and administrative contexts in which these protocols are embedded.
Our hypothesis posits that the relationship between protocols and their implementation contexts cannot be traced to a singular, definitive directional causality. Rather, as the protocols are designed and implemented, a complex interplay of material and cultural configurations becomes evident, exerting a determining influence on their own conformation. Here, our approach diverges slightly from that of Galloway (Galloway 2004), who describes a predominantly unidirectional causal relationship, focusing primarily on the consequences of TCP/IP and its distributed management style on contemporary societies and our way of existing.
In alignment with the analytical methodologies proposed by Kittler (see Sebastian and Geerke 1990), Benjamin (Benjamin 2019), and Barad (Barad 2007), it is our hope that the observation of the dynamics that determine the existence of information technologies and other cultural, productive, and physical configurations will in future enable us to delineate a more precise new-materialistic hermeneutic for the interpretation of digital objects. Such a hermeneutic approach will be based on the premise that computer technologies are integral components of open systems, in which meaning is continually negotiated through material processes.
Competing interests
The author has no competing interests to declare.
Contributions
Editorial
Special Collection Editors
Jason Boyd, Toronto Metropolitan University, Canada
Bárbara Romero-Ferrón, Western University, Canada
Section, Copy, and Layout Editor
A K M Iftekhar Khalid, The Journal Incubator, University of Lethbridge, Canada
Copy and Production Editor
Christa Avram, The Journal Incubator, University of Lethbridge, Canada
References
Abbate, Janet. 1994. From ARPANET to Internet: A History of ARPA-Sponsored Computer Networks, 1966–1988. Philadelphia: University of Pennsylvania.
Allman, Mark, and Aaron Falk. 1999. “On the Effective Evaluation of TCP.” ACM SIGCOMM Computer Communication Review 29(5): 59–70. Accessed October 17, 2024. http://doi.org/10.1145/505696.505703.
Bachimont, Bruno. 2010. Le sens de la technique. Le numérique et le calcul. Paris: Les Belles Lettres.
Bajer, Anna. 2022. “Understanding of Source Code in Language: Contribution of Philosophical Hermeneutics to the Critical Code Studies.” Digital Scholarship in the Humanities 37(2): 307–320. Accessed October 17, 2024. http://doi.org/10.1093/llc/fqab089.
Barad, Karen. 2003. “Posthumanist Performativity: Toward an Understanding of How Matter Comes to Matter.” Signs 28(3): 801–831. Accessed October 17, 2024. http://doi.org/10.1086/345321.
Barad, Karen. 2007. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham: Duke University Press.
Benjamin, Ruha. 2019. Race after Technology: Abolitionist Tools for the New Jim Code. Newark: Polity Press.
Bozalek, Vivienne, and Karin Murris. 2021. “Causality.” In A Glossary for Doing Postqualitative, New Materialist and Critical Posthumanist Research Across Disciplines, edited by Karin Murris: 40–41. Abingdon-on-Thames: Routledge.
Butler, Judith. 1997. Excitable Speech: A Politics of the Performative. New York: Routledge.
Cavallari, Peppe. 2017. “Les gestes dans l’environnement numérique. La ponctuation des affects.” Revue française des sciences de l’information et de la communication 11 (August). Accessed October 28, 2024. http://doi.org/10.4000/rfsic.2882.
Chun, Wendy. 2008. “On ‘Sourcery,’ or Code as Fetish.” Configurations 16 (September): 299–324. Accessed October 28, 2024. http://doi.org/10.1353/con.0.0064.
Deleuze, Gilles. 1992. “Postscript on the Societies of Control.” October 59: 3–7. Accessed October 20, 2024. https://www.jstor.org/stable/778828.
Doueihi, Milad. 2011. Pour un humanisme numérique. Paris: Seuil.
Edelman, Gerald M., and Giulio Tononi. 2001. A Universe of Consciousness: How Matter Becomes Imagination. New York: Basic Books.
Finn, Ed. 2017. What Algorithms Want: Imagination in the Age of Computing. Cambridge, MA: MIT Press.
Gadamer, Hans-Georg. 2018. Vérité et méthode. Les grandes lignes d’une herméneutique philosophique. Édition intégrale. Paris: POINTS.
Galloway, Alexander R. 2004. Protocol: How Control Exists after Decentralization. Cambridge, MA: MIT Press.
Galloway, Alexander R., and Eugene Thacker. 2013. The Exploit: A Theory of Networks. Minneapolis: University of Minnesota Press.
Gamble, Christopher N., Joshua S. Hanan, and Thomas Nail. 2019. “What Is New Materialism?” Angelaki 24(6): 111–134. Accessed October 20, 2024. http://doi.org/10.1080/0969725X.2019.1684704.
Khalifa, Inas, and Ljiljana Trajkovic. 2004. “An Overview and Comparison of Analytical TCP Models.” In 2004 IEEE International Symposium on Circuits and Systems (ISCAS), V–V. Accessed October 20, 2024. http://doi.org/10.1109/ISCAS.2004.1329680.
Marino, Mark C. 2020. Critical Code Studies. Cambridge, MA: MIT Press.
McLuhan, Marshall. 1994. Understanding Media: The Extensions of Man. Cambridge, MA: MIT Press.
O’Regan, Gerard. 2008. A Brief History of Computing. New York: Springer.
Pitts-Taylor, Victoria. 2016. “Mattering. Feminism, Science, and Corporeal Politics.” In Mattering: Feminism, Science, and Materialism, edited by Victoria Pitts-Taylor, 1–20. New York: New York University Press. Accessed October 20, 2024. http://doi.org/10.18574/nyu/9781479833498.003.0001.
Ryan, Johnny. 2010. A History of the Internet and the Digital Future. London: Reaktion Books.
Sebastian, Thomas, and Judith Geerke. 1990. “Technology Romanticized: Friedrich Kittler’s Discourse Networks 1800/1900.” MLN 105(3): 583–595. Accessed October 20, 2024. http://doi.org/10.2307/2905075.
Simondon, Gilbert. 2012. Du mode d’existence des objets techniques. Paris: Aubier.
Sipser, Michael. 2012. Introduction to the Theory of Computation. Boston: Course Technology.
Tirard, Stéphane, Michel Morange, and Antonio Lazcano. 2010. “The Definition of Life: A Brief History of an Elusive Scientific Endeavor.” Astrobiology 10(10): 1003–1009. Accessed October 20, 2024. http://doi.org/10.1089/ast.2010.0535.
Vitali-Rosati, Marcello. 2020. “Pour une théorie de l’éditorialisation.” Humanités Numériques 1 (January). Accessed October 21, 2024. http://doi.org/10.4000/revuehn.371.
Vitali-Rosati, Marcello. 2021. “Pour une pensée préhumaine. Humanités numériques, éditorialisation et métaontologie.” Sens public 1–20. Accessed October 21, 2024. http://doi.org/10.7202/1089654ar.
Wellbery, David. 1990. Foreword to Discourse Networks, 1800/1900, by Friedrich Kittler. Translated by Michael Metteer and Chris Cullens. Redwood City, CA: Stanford University Press.