⭐⭐⭐▫️▫️
Finished on: May 10, 2026
isbn13: 9780253035738

Annotations

Kant notes how efforts in philosophy had run aground. For close to two thousand years, he argues, philosophers have been asking questions that they never quite seemed to get any traction on answering. So Kant, instead of trying to deal with and respond to the existing queries, games the system by changing the questions and the terms of the inquiry. As he described it, “Hitherto it has been assumed that all our knowledge must conform to objects. But all attempts to extend our knowledge of objects by establishing something in regard to them a priori, by means of concepts, have, on this assumption, ended in failure. We must therefore make trial whether we may not have more success in the task of metaphysics, if we suppose that objects must conform to knowledge” (Kant 1965, B xvi).

As Daniel Dennett (1996, vii) once explained (and in the first lines of the preface to one of his books): “I am a philosopher, not a scientist, and we philosophers are better at questions than answers. I haven’t begun by insulting myself and my discipline, in spite of first appearances. Finding better questions to ask, and breaking old habits and traditions of asking, is a very difficult part of the grand human project of understanding ourselves and our world.”

This attention to words is not just about playing around. It is serious philosophical business, especially in the wake of what has been called “the linguistic turn.” For those unfamiliar with the phrase, it denotes a crucial shift in perspective, when philosophers came to realize that the words they use to do their work were themselves something of a philosophical problem. Despite the fact that this turning point is routinely situated in the early twentieth century, it is not necessarily a recent innovation. The turn to language has, in fact, been definitive of the philosophical enterprise from the very beginning—or at least since the time of Plato’s Phaedo.

“Now perhaps my metaphor is not quite accurate, for I do not grant in the least that he who investigates things in λόγος is looking at them in images any more than he who studies them in the facts of daily life” (Plato 1990, 100a). What Socrates advocated, therefore, is not something that would be simply opposed to what is often called “empirical knowledge.” Instead he promoted an epistemology that questions what Briankle Chang (1996, x) calls the “naïve empiricist picture”—the assumption that things can be immediately grasped and known outside the concepts, words, and terminology that always and already frame our way of looking at them. In other words, Socrates recognized that the truth of “things” is not simply given or immediately available to us in its raw or naked state. What these things are and how we understand them is something that is, at least for our purposes, always mediated through some kind of logical process by which they come to be grasped and conceptualized as such. In other words, words matter.

In trying to ascertain “what kind of correctness that is which belongs to names” (Plato 1977, 391b), Socrates discovers what we have already seen in the taxonomy just provided—a seemingly endless chain of reference, where one term is substituted for or used to explain the other terms. This insight, which is often attributed to structural linguistics and the innovations of Ferdinand de Saussure, is something that is, as Jay David Bolter (1991, 197) points out, immediately apparent to anyone familiar with the design and function of a dictionary: “For the dictionary has always been the classic example of the semiotic principle that signs refer only to other signs. . . . We can only define a sign in terms of other signs of the same nature. This lesson is known to every child who discovers that fundamental paradox of the dictionary: that if you do not know what some words mean you can never use the dictionary to learn what other words mean. The definition of any word, if pursued far enough through the dictionary, will lead you in circles.” The principal challenge, then, is to devise some way to put a stop to this potentially infinite regress of names and naming.
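
Bolter’s dictionary paradox can be reproduced in miniature. The sketch below is a hypothetical toy (the entries and the chain are invented for illustration, not taken from the book): every sign is defined only by pointing to another sign, so following the definitions far enough always leads back around.

```python
# Toy dictionary: each sign is "defined" only by another sign.
definitions = {
    "game": "play",
    "play": "activity",
    "activity": "action",
    "action": "deed",
    "deed": "act",
    "act": "action",  # the chain closes on itself
}

def trace(word: str) -> list[str]:
    """Follow definitions until a word repeats, exposing the circle."""
    seen, chain = set(), [word]
    while word not in seen:
        seen.add(word)
        word = definitions[word]
        chain.append(word)
    return chain

print(" -> ".join(trace("game")))
# game -> play -> activity -> action -> deed -> act -> action
```

Plato’s answer, taken up below, amounts to installing an external halting condition: a stipulated entry—the work of the “name-maker”—whose meaning is simply asserted rather than looked up.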

In the Cratylus this decision, which is quite literally a decisive cut or de-cision that interrupts the potentially infinite substitution of terms, is determined to be the purview of what Socrates calls the “name-maker” or “lawgiver” (Plato 1977, 388e). The series of terminological substitutions is therefore terminated, as Slavoj Žižek (2008a, 95) describes it, “through the intervention of a certain ‘nodal point’ (the Lacanian point de capiton) which ‘quilts’ them, stops their sliding and fixes their meaning.” This nodal point or knot in the network of interchangeable terms has been called the “master signifier”—a signifier that, although essentially no different from any other sign in the group, is situated in such a way that it masters the entire sequence of terminological substitutions by providing a kind of final reference or guarantee of meaning. This transpires not because of some natural and essential connection between this name and the thing it names but by way of a deliberate and contingent decision, or what Žižek (1992, 119) has called “an abyssal, nonfounded, founding act of violence.” Someone or some group—what Socrates calls the “name-maker” or “lawgiver”—asserts one particular term over and against all the others as the universal one that encompasses or masters the entire field. “It is the point at which one can only say that the ‘buck stops here’; a point at which, in order to stop the endless regress, somebody has to say, ‘It is so because I say it is so!’” (Žižek 2008b, 62).

Consequently, the choice of words—whether “virtual world,” “videogame,” MMORPG, or any of the other designations—is never neutral. Like the frame of a camera, the choice of a particular moniker frames certain objects, problems, and possibilities by situating them within the field of vision; but doing so also unavoidably excludes other aspects and opportunities from further consideration by locating them in the margins or just outside the field of vision.

This is precisely the theoretical insight that is developed and mobilized in the “linguistic turn.” Language, on this account, is not simply a set of prefabricated, ready-to-hand words that can be organized and combined together in order to say something intelligible about things in the world—things that presumably exist and have been presented prior to becoming represented in language. Instead language participates in shaping and making this reality accessible in the first place.

Or as James Carey (1992, 25) cleverly describes it by way of Kenneth Burke, “words are not the names for things . . . things are the signs of words.” Understood in this fashion, different words reveal different things or different aspects of things.

In his seminal book, Tractatus Logico-Philosophicus, for instance, Ludwig Wittgenstein famously argued that “the limits of my language mean the limits of my world” (1995, 149), indicating that the world one knows and operates in is shaped, formed, and delimited by the words that one has at his or her disposal.

This attentiveness to words is something that is perhaps best summarized by one of the more famous (or infamous) statements of Jacques Derrida (1976, 158): “il n’y a pas de hors-texte” or “there is nothing outside the text.” What he meant by this, however, is not what many critics have assumed or accused him of saying: “That does not mean that all referents are suspended, denied, or enclosed in a book, as people have claimed, or have been naïve enough to believe and to have accused me of believing. But it does mean that every referent, all reality has the structure of a differential trace, and that one cannot refer to this ‘real’ except in an interpretive experience” (Derrida 1993, 148). Working with this particular insight requires a methodology that is designed for and that can scale to this opportunity and challenge. This is what is called “deconstruction.”

As Derrida (1993) himself has said quite explicitly (and on more than one occasion), “the de- of deconstruction signifies not the demolition of what is constructing itself, but rather what remains to be thought beyond the constructionist or destructionist schema” (147, emphasis mine).

As Mark Dery (1996, 244) explains it: “Western systems of meaning [what Derrida calls “metaphysics”] are underwritten by binary oppositions: body/soul, other/self, matter/spirit, emotion/reason, natural/artificial, and so forth. Meaning is generated through exclusion: The first term of each hierarchical dualism is subordinated to the second, privileged one.” In other words, human beings tend to organize and make sense of the world through terminological differences or conceptual dualities, such as mind and body, male and female, good and bad, being and nothing, and so on. And the field of game studies is no exception—in fact, it is exemplary. “Many discussions in game studies,” as Nicholas Ware (2016, 168) explains, “have centered on binaries. In the 1990s, much was made of the real vs. the virtual. In the oughts, narratology vs. ludology.” Other influential conceptual distinctions in the field include: game versus player (Voorhees 2013, 19), work versus play (Calleja 2012), hardware versus software (Aslinger and Huntemann 2013, 3), and casual versus hardcore (Leaver and Willson 2016). For this reason, as Juul (2005, 11) concludes, “video game studies has so far been a jumble of disagreements and discussions with no clear outcome. . . . The discussions have often taken the form of simple dichotomies, and though they are unresolved, they remain focal points in the study of games.”

For any of these conceptual opposites, the two terms have not typically been situated on a level playing field; one of the pair is already determined to have the upper hand. Or as Derrida characterizes it, “We are not dealing with the peaceful coexistence of a vis-à-vis, but rather with a violent hierarchy” (Derrida 1981, 41). In the conceptual duality of real versus virtual, for example, the two terms have not been equal partners. The former already has a presumed privilege over the latter, and this privilege is perhaps best illustrated in The Matrix films. Early in the first episode of this cinematic trilogy, Morpheus offers Neo a choice between two pills—a red pill that leads to an authentic life in the real world and a blue pill that will keep one enslaved in the computer-generated virtual reality of the Matrix. In the face of these competing options, Neo does what appears to be the “right thing”; he reaches out and takes the red pill. Like the prisoner in Plato’s “Allegory of the Cave,” Neo selects truth as opposed to illusion, reality as opposed to fiction, and the real world as opposed to the virtuality of projected images.

In order to accomplish this, deconstruction consists of a complicated double gesture or what Derrida also calls “a double science.” This two-step procedure necessarily begins with a phase of inversion, where a particular duality or conceptual opposition is deliberately overturned by siding with the traditionally deprecated term. This is, quite literally, a revolutionary gesture insofar as the existing order is inverted or turned around. But this is only half the story. This conceptual inversion, like all revolutionary operations—whether social, political, or philosophical—actually does little or nothing to challenge the dominant system. In merely exchanging the relative positions occupied by the two opposed terms, inversion still maintains the conceptual opposition in which and on which it operates—albeit in reverse order. This can be illustrated, once again, in the Matrix trilogy by way of the character of Cypher. Cypher is a member of Morpheus’s crew, who, after experiencing life in the real world, decides to return to the computer-generated fantasies of the Matrix. Cypher therefore opts for the blue pill. In being portrayed in this fashion, the character of Cypher functions as Neo’s dramatic foil; he is, as Frentz and Rushing (2002, 68) characterize it using digital notation, “the 0 to Neo’s 1.” In deciding to return to the computer-generated fantasies of the Matrix, however, Cypher simply inverts the ruling conceptual opposition and continues to operate within and according to its logic. Simply turning things around, as Derrida (1981, 41) concludes, still “resides within the closed field of these oppositions, thereby confirming it.”

For this reason, deconstruction necessarily entails a second, postrevolutionary phase or operation. “We must,” as Derrida (1981, 42) describes it, “also mark the interval between inversion, which brings low what was high, and the irruptive emergence of a new ‘concept,’ a concept that can no longer be, and never could be, included in the previous regime.” Strictly speaking, this new “concept” is no concept whatsoever, for it always and already exceeds the system of dualities that define the conceptual order as well as the nonconceptual order with which the conceptual order has been articulated (Derrida 1982, 329). This “new concept” (that is, strictly speaking, not really a concept) is what Derrida calls an undecidable. It is first and foremost, that which “can no longer be included within philosophical (binary) opposition, but which, however, inhabits philosophical opposition, resisting and disorganizing it, without ever constituting a third term, without ever leaving room for a solution in the form of speculative dialectics” (Derrida 1981, 43).

The undecidable new concept occupies a position that is in between or in or at the margins of a traditional, conceptual opposition—a binary pair. It is simultaneously neither-nor and either-or. It does not resolve into one or the other of the two terms that comprise the conceptual order, nor does it constitute a third term that would mediate their difference in a synthetic unity, à la Hegelian or Marxian dialectics. Consequently, it is positioned in such a way that it both inhabits and operates in excess of the conceptual oppositions by which and through which systems of knowledge have been organized and articulated. It is for this reason that the new concept cannot be described or marked in language, except (as is exemplified here) by engaging in what Derrida (1981, 42) calls a “bifurcated writing,” which compels the traditional philosophemes to articulate, however incompletely and insufficiently, what necessarily resists and displaces all possible modes of articulation.

Perhaps the best illustration of deconstruction’s two-step operation is available in the term “deconstruction” itself. In a first move, deconstruction flips the script by putting emphasis on the negative term “destruction” as opposed to “construction.” In fact, the apparent similitude between the two words, “deconstruction” and “destruction,” is a deliberate and calculated aspect of this effort. But this is only step one. In the second phase of this double science, deconstruction introduces a brand-new concept. The novelty of this concept is marked, quite literally, in the material of the word itself. “Deconstruction,” which is fabricated by combining the de– of “destruction” and attaching it to the opposite term, “construction,” is a neologism that does not quite fit in the existing order of things. It is an exorbitant and intentionally undecidable alternative that names a new possibility. This new concept, despite its first appearances, is not the mere polar opposite of construction; rather, it exceeds the conceptual order instituted and regulated by the terminological opposition situated between construction and destruction.

As Derrida (1993, 141) explains, “deconstruction does not exist somewhere, pure, proper, self-identical, outside of its inscriptions in conflictual and differentiated contexts; it ‘is’ only what it does and what is done with it, there where it takes place.” Consequently, “there is no one single deconstruction,” but only specific and irreducible instances in which deconstruction takes place. Unlike a method that can be generalized in advance of its particular applications, deconstruction comprises a highly specific form of critical intervention that is context dependent.

This means that deconstruction is less a method—a road to be followed—and more of what Ciro Marcondes Filho has called metáporo. According to Marcondes Filho (2013, 58), a method is, “by definition, a pre-mapped path that the researcher needs to follow.” It is, therefore, generally “fixed, rigid, and immutable” (58). By contrast, metáporo, a neologism introduced by Marcondes Filho, is more flexible and responsive to the particular: “If on the contrary, one opts for a procedure that follows its object and accompanies it in its unfolding, this opens a way, a ‘poros’ or a furrow, like a boat that cuts through the water without creating tracks. With metáporo, the object follows its own way and we accompany it without previous script, without a predetermined route, living in what happens while pursuing the investigation” (Marcondes Filho 2013, 58).

In The Human Use of Human Beings (1988), Wiener writes the following: “It is the thesis of this book that society can only be understood through a study of the messages and the communication facilities which belong to it; and that in the future development of these messages and communication facilities, messages between man and machines, between machines and man, and between machine and machine, are destined to play an ever-increasing part” (16). In the social relationships of the future (we need to recall that Wiener wrote these words in 1950), the machine will no longer comprise a mere instrument or medium through which human users communicate and socialize with each other. Instead, it will increasingly occupy the position of another social actor with whom one communicates and interacts.

“Leading poststructuralists [like Derrida],” Mark Taylor (1997, 269) notes, “realize that, since they remain unavoidably entangled in the systems and structures they resist, the task of criticism is endless.” For this reason, the activity of deconstruction is not, strictly speaking, ever able to be finished with its work or to achieve final closure. As Derrida has described it (1981, 41), the end of deconstruction—“end” understood as the goal or objective of the activity—is to be “an interminable analysis.” The final chapter takes up and deals with the meaning of this rather cryptic phrase, producing a kind of inconclusive conclusion.

For I stand tonight facing west on what was once the last frontier. From the lands that stretch three thousand miles behind me, the pioneers of old gave up their safety, their comfort and sometimes their lives to build a new world here in the West. . . . Today some would say that those struggles are all over—that all the horizons have been explored—that all the battles have been won—that there is no longer an American frontier. . . . But I tell you the New Frontier is here, whether we seek it or not. Beyond that frontier are the uncharted areas of science and space, unsolved problems of peace and war, unconquered pockets of ignorance and prejudice, unanswered questions of poverty and surplus. . . . I am asking each of you to be pioneers on that New Frontier. (Kennedy 1960)

According to Gary Gygax, the inventor of the game, D&D occupies the place of the frontier during the closing decades of the twentieth century: “Our modern world has few, if any, frontiers. We can no longer escape to the frontier of the West, explore the Darkest Africa, sail to the South Seas. Even Alaska and the Amazon Jungles will soon be lost as wild frontier areas. . . . It is therefore scarcely surprising that a game which directly involves participants in a make-believe world of just such a nature should prove popular” (Gygax 1979, 29; quoted in Fine 1983, 55). Gygax, like Turner (1894), perceived the closing of the geophysical frontier and, like Cooper (2000), situated the RPG as a new frontier—a new world that is open for exploration, adventure, and settlement.

“Is the real world grating on you, with its wars, overheated summers, and incessant Tom Cruise updates? Just hop online and create a digital you that lives in a utopian cyber-realm.” Even in those circumstances where the assessment is more measured, utopianism is still the operative category. Grey Drane (2007, 1), for instance, is not ready to call Second Life utopia, but he still finds it connected to and involved with utopian ideas. “OK, I’m not suggesting that utopia can be achieved in Second Life, but it might be the kind of environment in which you could play around with what the word ‘utopia’ might actually mean.”

The real world can’t truly be escaped. It’s always waiting for you on the other side.

“The New World,” as Fuentes (1999, 195) argues, “became a nightmare as colonial power spread and its native peoples became the victim of colonialism, deprived of their ancient faith and their ancient lands and forced to accept a new civilization and a new religion. The Renaissance dream of a Christian Utopia in the New World was also destroyed by the harsh realities of colonialism: plunder, enslavement, genocide.”

“New technologies are,” Simon Penny (1994, 231) argues, “often heralded by a rhetoric that locates them as futuristic, without history, or at best arising from a scientific-technical lineage quite separate from cultural history.” New technology, ICT in particular, is often characterized as radically distinct and different from anything that came before, providing for a significant break with tradition that facilitates an easy escape from both cultural context and history. Even though technology is always the product of a specific culture and introduced at a specific time for a particular purpose, the futuristic rhetoric that surrounds technological innovation allows for this context to be set aside, ignored, or simply forgotten. As Ken Hillis (1999, xvii) summarizes it, “Cyberspace and VR are, respectively, a frontier metaphor and a technology offering both the promise of an escape from history with a capital H, and the encrusted meanings it contains, and an imaginary space whereby to perform, and thereby possibly exorcise or master, difficult real-world historical and material situations.”

The native peoples of South and North America, for instance, account for the so-called “age of discovery” and the settling of the American West with an entirely different and much less optimistic interpretation. This is particularly evident in critical reassessments of the dominant historical narratives as provided by scholars Tzvetan Todorov (1984), Barry Lopez (1992), and Carlos Fuentes (1999); by performance artists Coco Fusco and Guillermo Gómez-Peña (Fusco, 1995); and by efforts to write alternative histories like those formulated by Jonathan Hill (1988), Alvin Josephy (1993; 2007), and Francis Jennings (1994). Deploying the grand narratives of exploration, colonization, and settlement as if they were somehow beyond reproach and universally applicable has the effect of normalizing a culture’s experiences and asserting them over and against all other alternatives. This is not only presumptuous; it is the ethnocentric gesture par excellence—one assumes that his or her experience is normative, elevates it to the position of a universal, and imposes it upon others (Gunkel 2001, 34). In using the terms “new world” and “frontier,” game developers, players, and researchers impose a distinctly Euro-American understanding, colonizing both the idea and the technology of the virtual world.

“I would speculate,” Mary Fuller writes, “that part of the drive behind the rhetoric of virtual reality as a New World or new frontier is the desire to recreate the Renaissance encounter with America without guilt: this time, if there are others present, they really won’t be human (in the case of Nintendo characters), or if they are, they will be other players like ourselves, whose bodies are not jeopardized by the virtual weapons we wield” (Fuller and Jenkins 1995, 59). Understood in this way, computer technology simulates new territories to explore, conquer, and settle without the principal problem that has come to be associated with either the European conquest of the Americas or the westward expansion of the United States. Unlike the continents of North and South America, these new worlds are not previously inhabited. “Plenty of humans,” Castronova (2007, 63) points out, “lived in the allegedly New World happened upon by Christopher Columbus. Not so with new virtual worlds. On the day of launch, these are truly newly created terrains that no human has yet explored.” MMORPGs, then, reengineer or reprogram the concept of the New World, retaining all the heroic aspects of exploration and discovery while stripping away the problems that have historically complicated the picture. As I had previously argued, “The terra nova of cyberspace is assumed to be disengaged from and unencumbered by the legacy of European colonialism, because cyberspace is determined to be innocent and guiltless. What distinguishes and differentiates the utopian dreams of cyberspace from that of the new world is that cyberspace, unlike the Americas, is assumed to be victimless” (Gunkel 2001, 44).

Second, the virtual world, as Sardar (1996, 19) reminds us, “does have real victims.” These victims are not situated within the space of the game; rather, they are those others who cannot, for numerous reasons, participate. Although RPGs, MMOs, and MMORPGs offer “everyone,” as Castronova claims, the opportunity to find “the best possible place to be,” there are others—the majority of humanity, in fact—who do not have a choice in the matter. That is, the place where they find themselves is not something that they actively select or have the ability to change. The decision to migrate to a virtual world or not, which is often presented as if it were simply a matter of personal preference, is a privilege that only a small percentage of the world’s people get to consider. As Olu Oguibe (1995, 3) describes it, “despite our enthusiastic efforts to redefine reality, to push the frontiers of experience and existence to the very limits, to overcome our own corporeality, to institute a brave new world of connectivities and digital communities, nature and its structures and demands still constitute the concrete contours of reality for the majority of humanity.”

For the victims of colonial conquest, then, the virtual world presents something of a double whammy. Not only do the events of new world conquest and frontier settlement conjure up less than pleasant memories for indigenous and aboriginal peoples, but many of these populations are currently situated on the “information have-nots” side of the digital divide. To put it in rather blunt terms, the message is this: “Listen, we understand that what we thought to be a new world and frontier didn’t go so well for you folks, and we really regret that whole genocide thing. That was clearly a mistake, but we can just forget about all that. This time, we’re going to get it right, because this time we have excluded you people from the very beginning.”

Any language is already shaped by the sediment of its own culture and history. This is simultaneously the source of its explanatory power and a significant liability. The best we can do—the best we can ever do—is to remain critically aware of this fact and to understand how the very words we employ to describe technology already shape, influence, and construct what it is we think we are merely describing. This is, as James Carey (1992, 29) explained it, the “dual capacity of symbolic forms: as ‘symbols of’ they present reality; as ‘symbols for’ they create the very reality they present.” Consequently, the critical issue is to learn to deploy language self-reflectively, knowing how the very words we use to characterize a technological innovation are themselves part of an ongoing struggle over the way we understand the technology and frame its significance.

Anthropomorphising LLMs

As Martin Heidegger (1962, 257) reminds us, “There are three theses which characterize the way in which the essence of truth has been traditionally taken and the way it is supposed to have been first defined: (1) that the ‘locus’ of truth is the statement (judgment); (2) that the essence of truth lies in the ‘agreement’ of the judgment with its object; (3) that Aristotle, the father of logic, not only has assigned truth to the judgment as its primordial locus but has set going the definition of ‘truth’ as ‘agreement.’” According to this characterization, truth is not something that resides in objects but is located in statements about objects. In other words, truth is not “out there” to be discovered in things but is essentially a relative concept. It subsists in the agreement or correspondence between a statement about something, what is commonly called a “judgment,” and the real thing about which the statement is made. Heidegger (1962, 260) illustrates this with a simple example: “Let us suppose that someone with his back turned to the wall makes the true statement that ‘the picture on the wall is hanging askew.’ This statement demonstrates itself when the man who makes it, turns around and perceives the picture hanging askew on the wall.” The truth of the statement, “the picture is hanging askew,” is evaluated by “turning around” and comparing the content of the statement to the real object. If the statement agrees with or corresponds to the real thing, then it is true; if not, it is false. According to Heidegger’s analysis (1962, 184), this particular understanding of truth—truth as agreement or correspondence—dominates “the history of Western humanity” and can therefore be found throughout the Western philosophical and scientific traditions.

This term first appears in Turkle (1995). It recurs in many of her published writings, including the 2011 book Alone Together: Why We Expect More from Technology and Less from Each Other.

As Brian Christian (2011, 37) points out, “the Turing test is, at bottom, about the act of communication.” This is not a capricious decision. There are good epistemological reasons for focusing on this particular capability, and it has to do with what philosophers routinely call “the problem of other minds”—the seemingly undeniable fact that we do not have direct access to the inner workings of another’s mind. “How does one determine,” as Paul Churchland (1999, 67) famously characterized it, “whether something other than oneself—an alien creature, a sophisticated robot, a socially active computer, or even another human—is really a thinking, feeling, conscious being; rather than, for example, an unconscious automaton whose behavior arises from something other than genuine mental states?” Attempts to resolve or at least respond to this problem inevitably involve some kind of behavioral demonstration or test, like Turing’s game of imitation. “To put this another way,” Roger Schank (1990, 5) concludes, “we really cannot examine the insides of an intelligent entity in such a way as to establish what it actually knows. Our only choice is to ask and observe.”

“Computers, in the way that they communicate, instruct, and take turns interacting, are close enough to human that they encourage social responses. The encouragement necessary for such a reaction need not be much. As long as there are some behaviors that suggest a social presence, people will respond accordingly. . . . Consequently, any medium that is close enough will get human treatment, even though people know it’s foolish and even though they likely will deny it afterwards.” The CASA model, which was developed in response to numerous experiments with human subjects, describes how users of computers, irrespective of the actual intelligence possessed by the machine, tend to respond to the technology as another socially aware and interactive subject. In other words, even when experienced users know quite well that they are engaged with using a machine, they make what Reeves and Nass (1996, 22) call the “conservative error” and tend to respond to it in ways that afford this other thing social standing on par with another human individual. Consequently, in order for something to be recognized and treated as another social actor, “it is not necessary,” as Reeves and Nass (1996, 28) conclude, “to have artificial intelligence,” strictly speaking. All that is needed is that they appear to be “close enough” to encourage some kind of social response. And this is where things get interesting (or challenging) with regard to questions concerning social standing and how we can or should respond to others (and these other forms of otherness).

LLMs

Jean-François Lyotard in The Postmodern Condition: “Technical devices originated as prosthetic aids for the human organs or as physiological systems whose function it is to receive data or condition the context. They follow a principle, and it is the principle of optimal performance: maximizing output (the information or modification obtained) and minimizing input (the energy expended in the process). Technology is therefore a game pertaining not to the true, the just, or the beautiful, etc., but to efficiency: a technical ‘move’ is ‘good’ when it does better and/or expends less energy than another” (Lyotard 1984, 44).

As Andreas Matthias points out, summarizing his survey of learning automata:

Presently there are machines in development or already in use which are able to decide on a course of action and to act without human intervention. The rules by which they act are not fixed during the production process, but can be changed during the operation of the machine, by the machine itself. This is what we call machine learning. Traditionally we hold either the operator/manufacturer of the machine responsible for the consequences of its operation or “nobody” (in cases, where no personal fault can be identified). Now it can be shown that there is an increasing class of machine actions, where the traditional ways of responsibility ascription are not compatible with our sense of justice and the moral framework of society because nobody has enough control over the machine’s actions to be able to assume responsibility for them. (Matthias 2004, 177)
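
The class of machines Matthias describes has a simple skeleton that can be sketched in code. The toy below is entirely hypothetical (the class, names, and update rule are mine, not Matthias’s): a device ships with an initial decision rule, but the rule actually governing its behavior at any later moment is a product of its own interaction history.

```python
import random

class LearningMachine:
    """Toy online learner whose decision rule is rewritten by the
    machine itself during operation (illustrative only)."""

    def __init__(self, threshold: float = 0.5, learning_rate: float = 0.05):
        self.threshold = threshold  # the rule the producer shipped
        self.lr = learning_rate

    def decide(self, signal: float) -> bool:
        # The rule applied *now* is whatever has been learned so far,
        # not what was fixed during the production process.
        return signal > self.threshold

    def feedback(self, signal: float, was_correct: bool) -> None:
        # On error, the machine nudges its own threshold: neither the
        # operator nor the manufacturer chooses the post-deployment rule.
        if not was_correct:
            self.threshold += self.lr if self.decide(signal) else -self.lr

machine = LearningMachine()
for _ in range(1_000):
    signal = random.random()
    machine.decide(signal)
    machine.feedback(signal, was_correct=random.random() < 0.7)

print(f"shipped rule: 0.5, rule in effect now: {machine.threshold:.3f}")
```

The responsibility gap lives in that last line: the rule in effect now was authored by nobody in particular; it is the residue of a thousand feedback events that no operator or manufacturer specified.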

“The idea that we humans would one day share the Earth with a rival intelligence,” Philip Hingston (2014) writes, “is as old as science fiction. That day is speeding toward us. Our rivals (or will they be our companions?) will not come from another galaxy, but out of our own strivings and imaginings. The bots are coming: chatbots, robots, gamebots.” In the face of these (seemingly) socially aware and interactive others, we will need to ask ourselves some important questions. “Will we,” as Hingston (2014, v) articulates it, “welcome them, when they come? Will bots have human friends? Will we grant them rights?” In response to these questions, there appear to be at least three options available to us, none of which are entirely comfortable or satisfactory.

“My thesis is that robots should be built, marketed and considered legally as slaves, not companion peers” (Bryson 2010, 63). Although this might sound harsh, this argument (which was initially formulated for physically embodied robots, but could also be applied to software-based AI systems) is persuasive, precisely because it draws on and is underwritten by the instrumental theory of technology—a theory that has considerable history and success behind it and that functions as the assumed default position for any and all considerations of technology. This decision—and it is a decision, even if it is the default setting—has both advantages and disadvantages. On the positive side, it reaffirms human exceptionalism, making it absolutely clear that it is only the human being who possesses rights and responsibilities. Technologies—no matter how sophisticated, intelligent, and social—are and will continue to be mere tools of human action, nothing more. If something goes wrong because of the actions or inactions of a bot, there is always some human person who is ultimately responsible for what happens. Finding that person (or persons) may require sorting through layer upon layer of technological mediation, but there is always someone—specifically some human someone—who is responsible. This line of reasoning seems to be entirely consistent with current legal structures and decisions. “As a tool for use by human beings,” Matthew Gladden (2016) argues, “questions of legal responsibility . . . revolve around well-established questions of product liability for design defects (Calverley 2008, 533; Datteri 2013) on the part of its producer, professional malpractice on the part of its human operator, and, at a more generalized level, political responsibility for those legislative and licensing bodies that allowed such devices to be created and used” (Gladden 2016, 184).

Conversely, we can entertain the possibility of “machine ethics” just as we had previously done for other nonhuman entities, like animals (Singer 1975). And there have, in fact, been a number of recent proposals addressing this opportunity. Wallach and Allen (2009, 4), for example, not only predict that “there will be a catastrophic incident brought about by a computer system making a decision independent of human oversight,” but they use this fact as justification for developing “moral machines,” advanced technological systems that are able to respond to morally challenging situations. Michael Anderson and Susan Leigh Anderson (2011) take things one step further. They not only identify a pressing need to consider the moral responsibilities and capabilities of increasingly autonomous systems but have even suggested that “computers might be better at following an ethical theory than most humans,” because humans “tend to be inconsistent in their reasoning” and “have difficulty juggling the complexities of ethical decision-making” owing to the sheer volume of data that need to be taken into account and processed (Anderson and Anderson 2007, 5).

Bots that are designed to follow rules and operate within the boundaries of some kind of programmed restraint might turn out to be something other than what is typically recognized as a responsible agent. Terry Winograd (1990, 182–183), for example, warns against something he calls “the bureaucracy of mind,” “where rules can be followed without interpretive judgments. . . . When a person views his or her job as the correct application of a set of rules (whether human-invoked or computer-based), there is a loss of personal responsibility or commitment. The ‘I just follow the rules’ of the bureaucratic clerk has its direct analog in ‘That’s what the knowledge base says.’ The individual is not committed to appropriate results, but to faithful application of procedures.” Coeckelbergh (2010, 236) paints a potentially more disturbing picture. For him, the problem is not the advent of “artificial bureaucrats” but “psychopathic robots.” The term “psychopathy” has traditionally been used to name a kind of personality disorder characterized by an abnormal lack of empathy that is masked by an ability to appear normal in most social situations. The functional morality, like that specified by Wallach and Allen and Anderson and Anderson, seeks to design and produce what are arguably “artificial psychopaths”—bots that have no capacity for empathy but which follow rules and in doing so can appear to behave in socially appropriate ways. These psychopathic mechanisms would, Coeckelbergh (2010, 236) argues, “follow rules but act without fear, compassion, care, and love. This lack of emotion would render them non-moral agents—that is, agents that follow rules without being moved by moral concerns—and they would even lack the capacity to discern what is of value. They would be morally blind.”

This assumption has deep philosophical roots, going back at least to the work of René Descartes, where spoken discourse was identified as uniquely human and the only certain method by which to differentiate the rational human subject from ostensibly mindless animals and automatons. If one were, for example, confronted with a cleverly designed machine that looked and behaved like a human being, there would be, Descartes (1988, 44–45) argues, at least one very certain means of recognizing that these artificial figures are in fact machines and not real men: “They could never use words, or put together other signs, as we do in order to declare our thoughts to others. For we can certainly conceive of a machine so constructed that it utters words, and even utters words which correspond to bodily actions causing a change in its organs (e.g., if you touch it in one spot it asks what you want of it, if you touch it in another it cries out that you are hurting it, and so on). But it is not conceivable that such a machine should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as the dullest of men can do.” Turing’s game of imitation leverages this Cartesian tradition and turns it back on itself. If, in fact, a machine is able, as Descartes wrote, “to produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence,” then we would, Turing argues, have to conclude that it was just as much a thinking rational agent as another human being.

Attempts to rework the real/virtual distinction do not do much better. Take, for example, Luciano Floridi’s proposal designated by the neologism “onlife.” If one examines, Floridi (2014, 43) argues, the way digital technology and the user experience have evolved over time, the prevailing conceptual distinction that has, as Lévy pointed out, differentiated “online” virtual experience from “real life” no longer seems to be an appropriate way to characterize how we actually live, work, and play in the twenty-first century.

With interfaces becoming progressively less visible, the threshold between here (analogue, carbon-based, offline) and there (digital, silicon-based, online) is fast becoming blurred, although this is as much to the advantage of the there as it is to the here. To adapt Horace’s famous phrase, “the captive infosphere is conquering its victor.” The digital online world is spilling over into the analogue-offline world and merging with it. This recent phenomenon is variously known as “Ubiquitous Computing,” “Ambient Intelligence,” “The Internet of Things,” or “Web-augmented things.” I prefer to refer to it as the onlife experience. It is, or will soon be, the next stage in the development of the information age. We are increasingly living onlife. (emphasis in the original)

Mark Coeckelbergh’s New Romantic Cyborgs (2017).
