Robot cognition requires machines that both think and feel
For more than two millennia, Western thinkers have separated emotion from cognition – emotion being the poorer sibling of the two. Cognition helps to explain the nature of space-time and sends humans to the Moon. Emotion might save the lioness in the savannah, but it also makes humans act irrationally with disconcerting frequency.
In the quest to create intelligent robots, designers tend to focus on purely rational, cognitive capacities. It’s tempting to disregard emotion entirely, or to include only as much of it as seems strictly necessary. But without emotion to help determine the personal significance of objects and actions, I doubt that true intelligence can exist – not the kind that beats human opponents at chess or the game of Go, but the sort of smarts that we humans recognise as such. Although we can refer to certain behaviours as either ‘emotional’ or ‘cognitive’, this is really a linguistic short-cut. The two can’t be teased apart.
What counts as sophisticated, intelligent behaviour in the first place? Consider a crew of robots on a mission to Mars. To act intelligently, the robots can’t just scuttle about taking pictures of the environment and collecting dirt and mineral samples. They’d need to be able to figure out how to reach a target destination, and come up with alternative tactics if the most direct path is blocked. If pressed for time, the team of robots would have to know which materials are most important and should be prioritised as part of the expedition.
Part of being intelligent, then, is about the ability to function autonomously in various conditions and environments. Emotion is helpful here because it allows an agent to piece together the most significant kinds of information. For example, emotion can instil a sense of urgency in actions and decisions. Imagine crossing a patch of desert in an unreliable car, during the hottest hours of the day. If the vehicle breaks down, what you need is a quick fix to get you to the next town, not a more permanent solution that might be perfect but could take many hours to complete in the beating sun. In real-world scenarios, a ‘good’ outcome is often all that’s required, but without the external pressure of perceiving a ‘stressful’ situation, an android might take too long trying to find the optimal solution.
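To make that trade-off concrete, here is a minimal sketch of my own (in Python), not a design anyone has actually built: a hypothetical route planner in which an ‘urgency’ signal, standing in for the perceived stress of the situation, shrinks the deliberation budget so the agent settles for a good-enough route rather than exhaustively hunting for the best one. The cost model, the candidate routes and the specific numbers are placeholder assumptions.

```python
import time
import random

def evaluate(route):
    """Hypothetical cost model: pretend each evaluation is slow and noisy."""
    time.sleep(0.01)
    return sum(route) + random.random()

def plan_route(candidate_routes, urgency, max_seconds=1.0):
    """Pick a route; higher urgency shrinks the deliberation budget,
    so the agent accepts a good-enough option instead of the best one."""
    budget = max_seconds * (1.0 - 0.9 * urgency)   # less deliberation when urgent
    deadline = time.monotonic() + budget

    best_route, best_cost = None, float("inf")
    for route in candidate_routes:
        cost = evaluate(route)
        if cost < best_cost:
            best_route, best_cost = route, cost
        if time.monotonic() > deadline:
            break                                  # out of time: 'good' beats 'optimal'
    return best_route

# Illustrative use: 50 made-up candidate routes.
routes = [[random.randint(1, 10) for _ in range(5)] for _ in range(50)]
relaxed = plan_route(routes, urgency=0.1)   # budget covers every candidate
stressed = plan_route(routes, urgency=0.9)  # stops early with whatever is good enough
```

With urgency near zero the planner examines every candidate; near one it returns the best option it has found when the budget runs out – roughly the difference between tinkering in the shade and improvising in the midday sun.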
Most proposals for emotion in robots involve the addition of a separate ‘emotion module’ – some sort of bolted-on affective architecture that can influence other abilities such as perception and cognition. The idea would be to give the agent access to an enriched set of properties, such as the urgency of an action or the meaning of facial expressions. These properties could help to determine issues such as which visual objects should be processed first, what memories should be recollected, and which decisions will lead to better outcomes.
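A deliberately crude sketch of that ‘bolted-on’ pattern, again my own illustration rather than any published architecture, might look like this: a separate appraisal component tags otherwise neutral percepts with affective properties, and the rest of the system merely consults it when ordering its work. The class names and the toy appraisal rule are assumptions for the sake of the example.

```python
from dataclasses import dataclass, field

@dataclass
class Percept:
    label: str                               # e.g. 'rock sample', 'dust storm'
    features: dict = field(default_factory=dict)

class EmotionModule:
    """Caricature of the add-on design: a standalone component that
    annotates neutral percepts with affective properties."""
    def appraise(self, percept: Percept) -> dict:
        urgency = 0.9 if percept.label == "dust storm" else 0.2   # toy appraisal rule
        return {"urgency": urgency, "valence": -urgency}

class CognitiveSystem:
    """'Standard' perception/decision pipeline that only consults the module."""
    def __init__(self, emotion: EmotionModule):
        self.emotion = emotion

    def prioritise(self, percepts):
        # Affect is layered on top: used solely to reorder an already-built list.
        return sorted(percepts,
                      key=lambda p: self.emotion.appraise(p)["urgency"],
                      reverse=True)

agent = CognitiveSystem(EmotionModule())
queue = agent.prioritise([Percept("rock sample"), Percept("dust storm")])
```

Notice how cleanly the module could be unplugged: the pipeline would still run, just without any sense of what matters. That separability is exactly what the next finding calls into question.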
But research from the behavioural and brain sciences suggests that emotion is not just an ‘added feature’ layered on top of ‘standard’ cognition. Instead, it’s an integral part of our cognitive machinery. In one of the experiments from my lab, people in a scanner watched videos of rapid, flashed-up images of either a house or a skyscraper. They had to identify which of these scenes was present in the video, a task designed to be very difficult. We then introduced an element of emotional manipulation. Before viewing the clips, half of the participants received a mild electric shock while viewing a series of skyscrapers; the other half instead watched a series of houses appear, paired with the same mild shock. This is what’s known as classical conditioning: it links an initially neutral stimulus (a nondescript picture) with the emotional meaning of the unpleasant stimulus (the shock).
The outcome: participants conditioned to the skyscrapers were better at detecting them than at detecting houses; conversely, participants conditioned to houses detected those better than they did skyscrapers. And in each case, responses in the visual cortex were stronger for the type of stimulus (house or skyscraper) to which participants had been conditioned. This study shows that perception is not a passive process that merely reflects the external world. Rather, it involves picking up on the significance of objects, which in turn shapes how they are processed. Vision is never neutral; it is always laden with affective meaning.
It’s the architecture of the brain, with its short- and long-range connections, that allows these properties to emerge. Emotion doesn’t come about from local computations in a single region, such as the amygdala, which is frequently called the ‘emotion centre’ in the brain. Instead, anatomical studies have revealed that the areas in the brain associated with perception, cognition, emotion, motivation, action and bodily sensations are closely intertwined. Looking at the brain as a complex network helps to clarify why some brain structures, such as the amygdala, are important for emotion: they’re hubs, much like major airports that link to a very large number of destinations. As a consequence, these regions can influence and be influenced by many parts of the brain – which also suggests that it’s not possible to subtract emotion without affecting cognition.
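The hub idea can be made concrete with a toy graph, purely for illustration – the nodes and connections below are not an anatomical claim, just a stand-in for the kind of network the anatomical studies describe. Counting connections shows why a highly connected node cannot be removed without touching almost every route through the network.

```python
# A toy undirected graph standing in for brain regions and their connections.
# The specific nodes and edges are illustrative assumptions, not anatomy.
edges = [
    ("amygdala", "visual cortex"), ("amygdala", "prefrontal cortex"),
    ("amygdala", "hippocampus"),   ("amygdala", "brainstem"),
    ("amygdala", "motor cortex"),  ("visual cortex", "prefrontal cortex"),
    ("hippocampus", "prefrontal cortex"),
]

def degree(graph_edges):
    """Count connections per node: hubs are the most highly connected ones."""
    counts = {}
    for a, b in graph_edges:
        counts[a] = counts.get(a, 0) + 1
        counts[b] = counts.get(b, 0) + 1
    return counts

# Sorting by degree picks out the hub; deleting it ('subtracting emotion')
# would sever or weaken most paths between the remaining regions.
hubs = sorted(degree(edges).items(), key=lambda kv: kv[1], reverse=True)
```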
The point is not whether emotion is needed for intelligent, autonomous robots – the answer is yes – but that emotion needs to be hooked up to everything that goes on in a cognitive system. Emotion is not an ‘add-on’ module that endows a robot with feelings or allows it to express an internal state, such as the current risk of overheating. Rather, its integration is a design principle of the information-processing architecture. Without emotion, no being that we might create can have any hope of aspiring to true intelligence.
Source: https://aeon.co/ideas/robot-cognition-requires-machines-that-both-think-and-feel