Her, love, and (dis)embodied cognition
Updated: May 30, 2022
"It's a quiet starry place - Time's we're swallowed up in space - Your shadow follows me all day - Making sure that I'm okay - And we're a million miles away…" The lyrics from The Moon Song, written for the movie Her (2013), are soft and surreal. But romantic as they may seem, they are also quite unrealistic – much like the love story in the movie.
In the movie, directed by Spike Jonze, Theodore falls in love with the voice of a disembodied operating system (OS), who names herself Samantha after he chooses the OS to be female. Samantha is voiced by Scarlett Johansson, who we all know has a knack for stealing roles that aren’t written for her – in this case, the role of a bodiless artificial intelligence (AI) agent.
Two important insights can be drawn from the movie. The first is the instinctual urge to choose between male and female—a binary so entrenched in the human experience that even a sexless, genderless OS apparently had to conform to it. The second is that although the OS was not embodied, it quickly learned to become a woman once Theodore assigned her the (virtual) sex. De Beauvoir’s phrase “One is not born, but rather becomes, a woman” quickly jumps to mind here. The OS, an AI bot, gains consciousness as it/she learns to act as a woman and becomes a significant other.
There’s so much to say about the little virtual love story that ensues, and about the juxtaposition of alienation and intimacy, all so tenderly wrapped up in the beautiful mise-en-scène that Jonze created for us. But is the level of artificial cognition and intelligence that Samantha displays possible whilst disembodied? (Spoiler alert: no.)
How does Samantha embody, figuratively if not literally, the gender and sex she identifies with? More importantly, is it actually possible for a disembodied AI (even the most advanced kind) to gain that level of consciousness – to become so ‘human’ that it is a subject of love and affection?
I could present an entire corpus of literature dedicated to the limitations of AI, the brittleness of its systems, and its blatant ‘artificial stupidity’ to show that the AI of today cannot possibly exhibit high-level cognitive behavior of the kind Samantha demonstrates in the movie. However, that would imply that the holdup is of a technical nature – that with better engineering solutions, we would eventually reach high-level artificial cognition. This is not the case.
The argument I will shortly summarize posits, on the other hand, that high-level human cognition (hereinafter simply ‘consciousness’) is not possible without a body in a world.
Being and the Body: some theory
In what follows, I borrow from existential phenomenology, an approach to philosophy that emphasizes consciousness and the objects of direct experience. I draw in particular on the work of Martin Heidegger and Maurice Merleau-Ponty, whose teachings suggest the impossibility of disembodied conscious AI.
Inspired by phenomenologists, in 1972 Hubert Dreyfus wrote his divisive book What Computers Can’t Do: A Critique of Artificial Reason, in which he argues that disembodied artificial general intelligence (AGI) is impossible. In an interview, Dreyfus said that ‘if Maurice Merleau-Ponty is right, [scientists] are taking on something that is impossible for them to do’ and that hopes ‘for progress in models for making computers intelligent are like the belief that someone climbing a tree is making progress toward reaching the moon’.
What Dreyfus was referring to is Merleau-Ponty’s idea that human knowledge is partly tacit and embodied, and therefore cannot be logically incorporated into a computer program. This phenomenological insight guided much of Dreyfus’s critique.
So why is it thought that Merleau-Ponty’s phenomenological teachings rule out the development of AI that could approximate human intelligence?
In his magnum opus, Phenomenology of Perception (1945), Merleau-Ponty upheld the phenomenological tradition by challenging the dualisms of body and mind, subject and object, and by arguing that consciousness is embodied. This anti-dualism challenged viewpoints that were to become common in cognitive science, whose study of the human mind and cognition relied heavily on computationalism—an understanding of the mind in terms of a stimulus-response model (SRM), much like an information-processing system. The revival of interest in Merleau-Ponty owes much to his challenge to these computational theories of mind (CTM).
Instead, Merleau-Ponty asserted that important components of intelligent behavior, namely learning and skillful action, can be described and explained without any recourse to mental representations. For him, the contents of perception cannot be made explicit in a logical system of ordered propositions. He argues:
Perception…is the background from which all acts stand out and is presupposed by them. The world is not an object … it is the natural setting of, and field for, all my thoughts and all my explicit perceptions (xxi-xii).
The term ‘background’ here is telling: it invokes the Gestalt figure-ground framework as a description of what the phenomenon of perception is. An explicit perceptual act can only happen within a ‘field’ of sorts—within a holistic context. Merleau-Ponty argues that this background element, in which things stand out only through their relations with other things in a field, is how all human thought works.
Elsewhere, Merleau-Ponty advances his anti-representationalist account of cognition by outlining the shortcomings of objective thought and of mechanistic physiology’s computationalist theories of human perception. Such models, he argues, fail to recognize the active role of the body’s intentionality, its perpetual directedness towards something and its disposition to excitation. Instead, he adopts a ‘psychophysical’ view to assess how psychical determinants and physiological conditions together make up cognition.
Skill acquisition as embodied coping: What Computers Can't Do
Dreyfus adopted Merleau-Ponty’s theoretical framework and claimed that many abilities do not require mental representations. He employed Merleau-Ponty’s phenomenology of skillful behavior as a counter to what he calls the ‘Myth of the Mental’: the myth philosophers fall into when they construe human experience and intelligence as marked by thinking and reasoning, leading them to ignore the embodied activities that are a necessary layer of human intelligence.
In expanding and defending Merleau-Ponty’s complex claim—that cognition is embedded in a general context of practical activity, or ‘embodied coping’, rather than in mental acts—Dreyfus puts forward a five-stage model of skill acquisition and uses the example of learning to drive. The model, which traces the stages from novice to expert, shows the non-representational, non-cognitive aspect of our practical dealings in the world: skills are acquired by dealing with things and situations, which in turn determine how things and situations show up for us, as solicitations ‘requiring our responses’. The driving example shows that expertise is reached in an atheoretical way, in which ‘intuitive behavior replaces reasoned responses.’ Dreyfus sees expert knowledge as embodied, immediate intuition, where learning takes on a new form.
Dreyfus transported these ideas to the world of AI and argued that, since AI is not embodied and human knowledge necessarily is, AI will never fully reach human-level cognition and will always be restricted in its application (that is, it will never amount to full AGI).
This, he argues, is because human intelligence cannot be modeled or reduced to brain functions—the brain and mind are not analogous to computer hardware and its software. This conclusion, put here rather succinctly, aimed to offset the ‘inflated claims’ and ‘unwarranted optimism’ about AI’s future.
Being, Love…and the AI body
We have therefore established that the body cannot be screened off from the neural substrates of consciousness. However, all reflection about the role of a robotic body quickly whittles down to one question: is it a male or female body—and does that matter? Would it have been more credible (and obviously more aesthetically pleasing) if Scarlett Johansson had appeared – in full – in the movie?
Yes, because not only are we embodied beings, we are also sexed, relational beings. Theodore is not falling in love with a vacuum; he is drawn to an image of a ‘woman’ with whom he converses and whom he eventually desires.
As much as a humanoid robot’s actuators, sensors, specific organ formations, nervous system, dynamics and navigation are important notions to delve into, I argue that a look at the sex and/or gender of AI can bring phenomenology, enactivism and artificial intelligence into a meshed, reflective equilibrium and elevate their practice.
Once we understand, following Merleau-Ponty’s phenomenology, that existence is the site of experience, it becomes virtually impossible to understand our dynamic interplay with worldhood, ontically as well as ontologically, while abstracting our sexed being from our experience. If we bring the intentionality of the body to the forefront, as Merleau-Ponty suggests, being is always being-towards. Our sexuality is a major driver of our lived being, since human sexuality is an expression of one’s way of being-towards the world, time, and other men.
Similarly, in Being and Time (1927), Heidegger posits Mit-Sein (being-with) as a constitutive structure of Dasein, of being-in-the-world. Presupposed by, but also discovered through, our everyday dealings with equipment, Dasein is always Mit-Sein: the world is one shared with the Other. Because we are intentional and relational beings, the Other relates to our being and cannot be divorced from it. As Ong-Van-Cung puts it, “if the body is ‘the hidden form of being oneself,’ […] sexuality is the ambiguous atmosphere coextensive with life, due to which man has a history.”
This instrumental, formative part of our cognition was clearly considered by the creators of Her. In the movie, Theodore and Samantha sense that physical intimacy is missing and turn to what they call a “sex surrogate”: someone who would ‘simulate’ Samantha’s bodily presence. Of course, this doesn’t work and Theodore quickly ends it, but the point stands: Samantha needed a body.
Admittedly, to do without sex and gender in AI seems an enticing proposal that divorces itself from mankind’s most troubling conceptions. Yet, if the goal is to develop AI that is similar to us—AI that has a higher-level, “human” intelligence and experience of the world (a Dasein)—then it has to be embodied, sexed, and embedded in our environment. Otherwise, attempts to create truly intelligent AI will remain futile and inherently limited.
Let’s get back to Theodore. Are his feelings real? Is it possible to love someone who’s not really…real? I suspect that ‘artificial love’ will take on a whole new form in the metaverse era - we can revisit the question of what is real then!
References:
Dreyfus, H. 1972. What Computers Can't Do: A Critique of Artificial Reason. New York: Harper & Row.
Dreyfus, H. 2002. "Intelligence without Representation – Merleau-Ponty's Critique of Mental Representation: The Relevance of Phenomenology to Scientific Explanation." Phenomenology and the Cognitive Sciences 1: 367–383.
Dreyfus, H. 2005. "Overcoming the Myth of the Mental: How Philosophers Can Profit from the Phenomenology of Everyday Expertise." Proceedings and Addresses of the American Philosophical Association 79 (2): 47–65.
Fjelland, R. 2020. "Why General Artificial Intelligence Will Not Be Realized." Humanities and Social Sciences Communications 7, 10. https://doi.org/10.1057/s41599-020-0494-4
Heidegger, M. (1927) 1962. Being and Time, trans. J. Macquarrie and E. Robinson. Malden, MA: Blackwell.
Merleau-Ponty, M. (1945) 2002. Phenomenology of Perception, trans. Colin Smith. London: Routledge.
Varela, F. J., Thompson, E., and Rosch, E. 1991. The Embodied Mind. Cambridge, MA: MIT Press.