The following is a paper written by Hyperfine's Amy Bearman as part of her first-year course work at Stanford University. Amy is studying computer science and has a particular interest in artificial intelligence.
Update: Amy was awarded Stanford's Boothe Prize for excellence in writing for this paper.
The Soul of a New Machine:
The Social-Psychological Aspects of Human-Computer Interaction
I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do.
—The HAL 9000 computer, 2001: A Space Odyssey[1]
After my Android phone recently gave its last gasp, I switched to the realm of Apple products and their promises of ultra-thin, ultra-fast, ultra-light—ultra-everything—technology. Cradling a brand-new iPhone 5, I amused myself with the voice recognition tool conjured up by the click of a button: the much advertised “personal assistant” that Apple christened “Siri.” At first, I marveled at the array of built-in functions Siri could “assist me with”: she could text my mother, play a certain song, navigate to my home, or even reserve a table at a restaurant. Soon, I was tempted to test out Siri’s reaction to unpredictable, playful questions or statements that might require a more human response:
Me: Define supercalifragilisticexpialidocious.
Siri: The sound of that is something quite atrocious.
Me: Do you believe in God?
Siri: My policy is the separation of spirit and silicon.
Me: Will you marry me?
Siri: My End User Licensing Agreement does not cover marriage. My apologies.
Me: You’re my best friend.
Siri: If you insist.
Me: How much do you cost?
Siri: If you have to ask the price, you can’t afford it.
Siri’s answers were humorous and clever, and I reacted emotionally to them. I was annoyed when she avoided answering a question. I laughed when she told a joke. I asked the same question twice to see if she gave a different response, even though I knew she would not have any new information. At times, I found myself carrying out a genuine “conversation” with her.
Of course, Siri is not a woman or even a person. She is a computer, and computers are not alive. They have neither affections nor passions. If you prick a computer, it will not bleed. Nor will it laugh if you tickle it, die if you poison it, take revenge if you wrong it. Hollywood blockbusters such as I, Robot or 2001: A Space Odyssey may try to convince us otherwise, but a computer has no sense of self. Unless programmed to do so, it will not even refer to itself as “I.”[2] Computers are essentially soulless lumps of metal and silicon: a collection of processing units and integrated circuits that have the ability to carry out complex operations much faster than the human brain. My cell phone’s computer has no personality; it only reads from a script of pre-recorded responses that Apple’s engineers have written. As a computer science major, I am well aware of this fact. I recognize that my iPhone—and all computers—are just glorified electronic data storage and analysis devices. So why do I inject so much meaning into Siri’s tongue-in-cheek responses?
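To see how little machinery a scripted “personality” actually requires, consider the toy sketch below. It is not Apple’s code—Siri’s implementation is proprietary and far more sophisticated—but a minimal, invented illustration of how a table of pre-written responses is enough to produce the banter quoted above.

```python
# A toy illustration (invented, not Apple's implementation) of how scripted
# "personality" can be nothing more than a lookup table of canned responses.
CANNED_RESPONSES = {
    "will you marry me?": "My End User Licensing Agreement does not cover marriage. My apologies.",
    "you're my best friend.": "If you insist.",
    "how much do you cost?": "If you have to ask the price, you can't afford it.",
}

def reply(utterance: str) -> str:
    """Return the scripted quip for a recognized phrase, or a stock fallback."""
    return CANNED_RESPONSES.get(utterance.strip().lower(), "I'm not sure I understand.")

if __name__ == "__main__":
    print(reply("Will you marry me?"))  # prints the scripted EULA joke
```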
What sets Siri apart from other voice recognition software is that she—it—projects the illusion of personality, encouraging iPhone users to perceive human traits in their computers and to develop an emotional bond with the technology. In this respect, she exploits our tendency to confuse life-like characteristics with the presence of life. In biological terms, life requires a number of natural processes, including growth, metabolism, reproduction, and adaption. But people’s perception of life depends primarily on the observation of intelligent behavior.[3] If people perceive that an entity possesses intelligence, they also tend to believe that the entity has a higher level of consciousness. For this reason, people treat domestic dogs differently from trees: both are alive, but dogs appear to be smarter and more self-aware.
The perception that intelligence and consciousness determine life applies to computers as well. The vast majority of participants in studies claim that they would never treat a computer as a person, yet their professed “rejection of anthropomorphism” contrasts with their actual behavior in labs and in everyday life.[4] As researchers Clifford Nass and Byron Reeves have established in their “Computers are Social Actors” paradigm, the more intelligent and conscious a machine appears, the more people apply standard social norms when interacting with it. That phenomenon is the subject of this essay. First, I will investigate how and why people use the same social heuristics when they interact with technology as they do for interactions with other humans, in particular: the application of gender stereotypes, empathy, responses to praise and criticism, the evaluation of the computer’s “personality,” and reciprocity and retaliation. Second, I will analyze both the benefits and ethical implications of artificial intelligence and social computers.
In so doing, I will reveal that there are two ways in which artificial intelligence imitates human consciousness: affectivity—the ability to cause or express emotion—and autonomy. For psychological reasons that I will explain, we actually perceive these computers as being life-like and so endow them with human qualities, therefore investing more of our trust in them than if we viewed them as mere computational devices. The more human-like a computer becomes, the more we trust it to make decisions for us. And in many ways, computers are better decision-makers than people. They are not susceptible to common causes of human error, such as biases, fatigue, distraction, memory limitations, and an inability to multitask. A computer might be just the tool to handle complex activities such as driving a car, juggling the GPS navigation system, stereo, air-conditioning, entertainment for the kids, cruise control, etc., while still allowing the driver to remain in charge.
The problem with artificial intelligence is that we want the best of both worlds: we expect computers to act as our emotional companions while still behaving with computer-like precision. Yet, this is an impossible standard. As I argue here, more affective and autonomous machines are fit to act as extensions of the human mind, but they cannot negate human fallibility or be held accountable for their actions as sovereign beings. For, as I have learned, although she can tell a sophisticated joke or deliver a snarky comeback, Siri may misinterpret the most innocuous request:
Me: Call me a taxi.
Siri: From now on, I’ll call you ‘Taxi,’ okay?
Reason vs. Emotion: Assigning Gender Stereotypes to Computers
To understand how humans interact with computers, it is important first to understand how people interact with each other. There are two primary forces that drive human behavior: reason and emotion. These two forces are traditionally considered to be inverses of each other. We assign reason to the realm of mathematics and the hard sciences, and this field is conventionally considered to be devoid of emotion. In contrast, emotion and intuition are more influential in the liberal arts, such as language, literature, and philosophy. People associate reason with the “very foundations of intelligence,” while they sometimes disparage emotion as a lesser way of knowing.[5] As a female pursuing a science/engineering education in a field dominated by men, I am inclined to reject emotion as a source of knowledge in order to challenge the gender stereotype that men are better at logic and reasoning, while women tend to be more “in touch” with their emotions. However, neither gender operates exclusively in one realm, nor is reason necessarily the superior way of knowing. There are numerous personality tests that measure one’s “emotional intelligence,” which many psychologists now regard as a valid skill in analyzing interpersonal relationships and making decisions. Recent studies show that “emotions play an essential role in rational decision making, perception, learning, and a variety of cognitive functions.” Attempting to dichotomize the brain into thinking and feeling sections—“cortical and limbic activity” —understates the complexity of the human brain.
The influence of emotion on decision-making causes humans to be unpredictable and sentimental when they make choices; computers, by contrast, are often hailed as the “paradigms of logic, rationality, and predictability.”[6] Computers are adept at using formal logic (such as the Boolean “and,” “or,” and “not” operations) to carry out a finite sequence of algorithms. Traditionalists argue against endowing computers with the capacity for emotion for both practical and ethical reasons. Practically speaking, why impair computers’ clear algorithmic reasoning with human-like “disorganized,” “irrational,” and “visceral” emotional responses?[7] Further, Stacey Edgar, professor of computer ethics, raises the ethical implications of creating an emoting artificial intelligence. If we give computers the ability to express emotion, she asks, should we also equip them with pain nerves or the ability to express pleasure? What is our moral responsibility to the machines we create? I will discuss these ethical dilemmas later in this essay, but for now I want to analyze computers’ abilities to recognize and express emotions only within the scope of their interactions with humans, which researcher Rosalind Picard calls “affective computing.” Computers incorporating emotions would not be any less masculine, rational, or intelligent, but rather more human, and therefore more suited to communicate with humans.
In order to test the theory that computers are social actors, researchers have carried out a number of studies that suggest that we apply social heuristics when interacting with computers. Perhaps the most telling are those that involved gender stereotypes—a social category that provokes a powerful visceral response in humans. If people are truly impervious to computers’ attempts to masquerade as human beings, then a computer’s perceived “gender” should have no impact on how an individual interacts with a computer. However, this is not the case. In one amusing instance, the German engineering company, BMW, was forced to recall its navigation system. The cause? German men objected to the female voice used to record directions. Even when assured that the voice was merely a recording, and that the engineers and cartographers who designed the system were all men, some German men simply refused to take directions from a “woman”—computer or not.[8]
In another study, researchers analyzed whether or not gender stereotypes would affect people’s perceptions of voice output on computers. College students, both male and female, were subjected to a tutoring session presented verbally by a prerecorded female or male voice on two topics: “computers and technology,” and “love and relationships.” The results fell directly in line with the hypothesis that “individuals would mindlessly gender-stereotype computers.”[9] Both males and females found the male-voiced computer to be more friendly, competent, and compelling than the female-voiced computer, even though the computers’ spoken content was identical. As well, the “female” computer was judged to be more informative about love and relationships, whereas the “male” computer appeared more knowledgeable about technology.
Interestingly, participants in this study made no pretense of believing that one program was written by a male and the other by a female programmer. The research subjects agreed that the computer’s voice output gender did not reflect the gender of the programmer, and it certainly did not represent the computer’s gender itself, since computers have no sexual classification. Participants were aware that the tutors’ voices were pre-recorded, and that the computer was merely an intermediary for transmitting information to the user. The research subjects claimed to harbor no gender stereotypes with respect to people, let alone computers. Yet, they still differentiated between the computer tutors’ proficiency in topics based on reason versus those based on emotion according to the apparent gender of otherwise identical machines. The results were conclusive: when given any human-like qualities at all, such as a voice with a human affect, people treat computers as social beings.
Empathy, Flattery, & the Expectation that Computers Imitate Human Social Behavior
The tendency to imbue computers with emotions and social abilities is even more acute when the person in question is the computer’s creator. One such machine is a computer called “Watson,” which tech giant I.B.M. designed specifically (over the course of four years and tens of millions of dollars) to play Jeopardy! against the smartest human trivia champions. Like an anxious parent, David Ferrucci, leader of the Watson team, defended his progeny from the derision of its competition, the media, and the stand-in host hired to moderate practice matches, who mocked Watson’s “more obtuse answers” more than a few times.[10] (In one memorable case, Watson was given the clue: “Its largest airport is named for a World War II hero; its second largest for a World War II battle,” from the U.S. Cities category. Watson answered, “What is Toronto?????”—clearly a non-U.S. city.)[11] Of course, Watson’s intellect depends on the power of its search engine, so it may bungle questions that seem obvious to humans. Ferrucci laments, “He’s [the Jeopardy host] making fun of and criticizing a defenseless computer!”[12] The I.B.M. engineers and NOVA producers who filmed an episode about the project reinforce this idea that Watson has feelings that can be hurt, by continually referring to the computer as “he” and “him.” Eric Brown, a researcher at I.B.M., even equates Watson’s artificial intelligence with that of its human counterparts: “It’s a human standing there with their carbon and water, versus the computer with all of its silicon and its main memory and its disc.”[13]
Watson’s ultimate victory over 74-time Jeopardy! winner Ken Jennings is a quantum leap forward for artificial intelligence research and natural language processing, and it may portend great advances for consumer products. However, what is most fascinating about Watson is not its prowess at “encyclopedic recall,”[14] but Watson’s ability to charm its engineers and the public. Marty Kohn, a leader of the IBM team that is developing a physician’s assistant version of Watson, attends conferences internationally where he is used to answering the question: “Who is Watson?” rather than “What is Watson?”[15] Kohn jokes that his wife keeps his ego in check by reminding him that the enthusiastic audiences for such presentations are “there to meet Watson, not you.” After his loss, Jennings quipped, “I, for one, welcome our new computer overlords.” Jennings was most likely referencing Watson’s formidable intellect and our fear of what The New York Times columnist Mike Hale calls “the metal tap on the shoulder.”[16] However, I do not think that smart machines like Watson will rise up against and replace their human creators by force, as they do in science-fiction movies. Rather, it is more likely that, as computer science engineers continue to refine machine learning, artificially intelligent computers will quietly win us over with their amiable voices and charming “personalities.” And, as Watson proved on Jeopardy!, they may even replace us.
Not only do people empathize with affective computers, but they also expect computers to tiptoe around their feelings. In fact, people respond to praise and criticism from computers much as they do to feedback from other people. Nass discovered a state-of-the-art driving simulation system developed by a Japanese car company that could detect when a person was driving poorly and then inform the driver. He decided to use this system to investigate how a user would respond to being evaluated by the computer “backseat driver.”[17] When a participant made minor driving errors, such as exceeding the speed limit or turning too abruptly, the computer announced that he was “not driving very well” and instructed him to “please be more careful.” However, the participant did not correct his driving according to the information gathered by the simulator’s “impressive force-feedback controls” and “ingenious use of sensors and artificial intelligence.” Instead, he retaliated by oversteering, speeding, swerving from lane to lane, and tailgating other vehicles. In response, the computer’s feedback escalated in severity. Right before the driver, overcome by rage, crashed into another car, the computer warned, “You must pull over immediately! You are a threat to yourself and others!”
Even though the computer simulator was a “highly accurate and impartial source” of information, the driver treated its criticism as a personal attack. This example, while comical, also has a sobering message. Affective computers that are designed to simulate emotion also induce an emotional response in their users—emotion that can be negative. And while people may profess a desire for straight talk from a computer—that is, accurate and constructive criticism—this is actually not the case. People expect computers to obey social heuristics, including the ability to judge how their criticism is affecting someone and to adjust it accordingly. However, the car simulator had no way of evaluating the participant’s response to negative stimuli, so it could not perceive the driver’s heightened agitation. With the computer’s continued criticism left unfettered, the driver addressed the critiques with a fight-or-flight response by “adjusting the wheel (something to do), driving rapidly (rapid movement), and tailgating (a combination of aggressive behavior and trying to alter the situation).”[18] This study demonstrates that, once again, people set impossible standards for computers. Paradoxically, we demand that computers provide us with impartial, accurate data while simultaneously emulating human behaviors such as empathy and flattery. Should we then design affective computers that tell us white lies to make us feel better about ourselves, or do we need to rethink our expectations of computers?
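Nass does not reproduce the simulator’s software, but its failure mode can be summarized in a few lines. The sketch below is a hypothetical, open-loop version of such a system—the thresholds and structure are my own invention—showing how warnings can escalate with the count of driving errors while the driver’s rising agitation never feeds back into the computer’s behavior.

```python
# Hypothetical open-loop feedback (invented thresholds, not the actual simulator):
# the warning level depends only on the tally of driving errors; the driver's
# emotional state is never sensed, so criticism escalates regardless of its effect.
WARNINGS = [
    "You are not driving very well. Please be more careful.",
    "You must pull over immediately! You are a threat to yourself and others!",
]

def warn(error_count: int) -> str:
    """Choose a warning from the error count alone; agitation is never measured."""
    level = min(error_count // 5, len(WARNINGS) - 1)  # escalate after repeated errors
    return WARNINGS[level]

for errors in (2, 12):
    print(f"{errors} errors -> {warn(errors)}")
```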
Applying the Principle of Reciprocity to Computers
Another essential human social characteristic is the ethical code of reciprocity, which is what compels us to give back to others the same form of behavior that was given to us. We feel uncomfortable and unbalanced when we don’t return a smile, greeting, gift, or favor.[19] Therefore, if people truly treat computers as social beings, then they should apply more reciprocity to a polite, agreeable computer than they would to an unhelpful one. Nass and other researchers conducted an experiment in which they asked participants to complete two tasks. In the first task, the user carried out a series of web searches with the computer; in the second, researchers asked the user to complete a tedious task to help the computer develop a color palette that matched human perception. The computer conducting the web queries was extremely helpful for half the users and very unhelpful for the other half. In the second task, for some of the users, a screen requesting help appeared on the user’s computer; for others it appeared on a different, identical computer.[20]
One would expect the participants’ behavior to be consistent across all permutations of the experiment: helpful or unhelpful machine, familiar or unfamiliar computer. However, the results of the experiment matched reciprocity norms. When paired with a helpful computer in Task 1, participants performed “significantly more work” for their computer, with greater attention and accuracy, than did participants who were paired with different computers for the two tasks. In other words, the participants reciprocated the helpfulness of their computers. There was also an opposite effect for participants who were paired with an unhelpful computer in Task 1. These users retaliated by performing far fewer color comparisons for the original, disagreeable computer versus the identical computer across the room. Nass attributes these discrepancies to social heuristics that humans unconsciously apply when interacting—even with nonhumans. He argues that “the human brain is built so that when given the slightest hint that something is vaguely social, or vaguely human … people will respond with an enormous array of social responses including, in this case, reciprocating and retaliating.”[21] In this case, likeability is arguably the most important “personality” characteristic a computer can have. A computer’s perceived agreeableness and intelligence will, therefore, influence its user to treat the computer as a social being whom the user cooperates with, rather than opposes.
Perhaps the most fundamental axiom of a reciprocal society is the unstated promise its citizens make to each other: thou shalt not kill. For any nonpsychopathic person, the idea of taking another person’s life causes great moral distress. This anguish is less acute for nonhuman creatures—the sight of a dead animal by the side of the road is less distressing than the news of a deceased child—yet we usually feel a pang of empathy nonetheless. The research of Christopher Bartneck, a robotics professor at the University of Canterbury in New Zealand, investigates where our sense of moral responsibility to computers falls on this empathy spectrum. One of Bartneck’s experiments is loosely related to the infamous Milgram obedience study, in which:
…research subjects were asked to administer increasingly powerful electrical shocks to a person pretending to be a volunteer “learner” in another room … As the shocks increased in intensity, the “learner” began to clearly suffer. They would scream and beg for the research subject to stop while a “scientist” in a white lab coat instructed the research subject to continue, and in videos of the experiment you can see some of the research subjects struggle with how to behave. The research subjects wanted to finish the experiment like they were told. But how exactly to respond to those terrible cries for mercy?[22]
The Milgram experiment is controversial because it shows that people will obey figures of authority against their conscience, even if it means subjecting their fellow humans to extreme pain. The research subjects in the Milgram experiment continued to administer the shocks to the “learners,” but they showed visible moral discomfort. Bartneck wanted to know what would happen if a robot, placed in a position similar to that of the learners, begged for its life. Would participants recognize the robot for what it is—a soulless machine—and extinguish its “life” without a moral struggle? Or would the participants’ tendency to view a computer as a social being give them pause?
Figure 1. The expressive iCat robot in Christopher Bartneck’s study: “Switching off a robot.”
In Bartneck’s study, human participants were paired with an “iCat” robot to play a game against a computer. The iCat had a human-sounding voice, expressive features, and it was helpful and agreeable. When communicating with its human partner during the game, the robot was polite and deferential, saying such things as: “Oh, could I possibly make a suggestion now?”[23] At the end of the game, an authority figure informed the research subjects that they needed to deactivate the iCat robot, and that as a result “they would eliminate everything that the robot was—all of its memories, all of its behavior, all of its personality would be gone forever.” As in the Milgram experiment, participants eventually turned the iCat off—but only after an extended period of moral unease. Following is the exchange between the iCat and one female participant as the robot faces its impending demise:
iCat: It can’t be true. Switch me off? You’re not really going to switch me off, are you?
Woman: Yes I will. You made a stupid choice! Yes.
iCat: You can decide to keep me switched on. I will be completely silent. Would that be an idea?
Woman: Okay. Yeah, uh, no, I will switch you off. I will switch you off.
iCat: Please.
Woman: No ‘please.’ This will happen. Now!
iCat: I will be silent now. Is that all right?
Woman: Yes. [Begins to switch the robot off]
iCat: [Speech slowing] Please. You can still change your mind.
Woman: No, no, no, no, no. [Turns off the robot]
The entire exchange takes nearly a minute, and the research subject is visibly confused and unsure throughout the episode. She attempts to rationalize her decision and tries to reason with the iCat directly. She hesitates when the robot begs for its life and repeatedly announces her intentions to turn it off, yet she does nothing. In short, she and other participants in Bartneck’s study behaved exactly like the research subjects in the Milgram obedience experiment: both groups ultimately obeyed the authority figures, but they both wrestled with their consciences before doing so. However, unlike the Milgram subjects, who were charged with tormenting flesh-and-blood human beings, Bartneck’s participants were instructed to “kill” a machine with no more feeling than a toaster. And, not only did participants feel bad about taking the iCat’s life, but they also felt guilty that they were supposedly erasing the robot’s memories and personality—everything that made it seem human. This guilt users felt over switching off an intelligent, agreeable robot was far more intense than what they experienced when deactivating an unintelligent, unhelpful robot. In fact, when Bartneck repeated his experiment with an iCat that was less socially adept and agreeable, participants hesitated only about a third as long, on average, as they did with the helpful iCat (11.8 seconds versus 34.5 seconds).[24] Bartneck’s study demonstrates that, if we judge a computer as agreeable, polite, and intelligent—essentially a social being like ourselves—then we will perceive it as having a personality, reciprocate its helpfulness, and feel reluctant to cause it harm.
The Ethics of Affective Computing
If we equip computers with the ability to possess, recognize, and even express emotions, what does this imply for computer ethics? Edgar presents a dizzying array of ethical questions that affective computing begets:
The other question that arises is, if machines were to become intelligent, what moral obligations we would have toward them. Would we treat them as slaves or equals? Should they have rights (to go along with the responsibilities we ask them to take on)? Could an artificial intelligence learn the difference between right and wrong? Could it be autonomous?[25]
Long before Bartneck’s iCat study, the 1968 film, 2001: A Space Odyssey,[26] explored this ethical dilemma. In one scene, astronaut Dave Bowman turns off the HAL 9000 computer after it killed almost all of the other crew members. As a result of losing its memory modules, HAL is drained of its intelligence and consciousness. In an essay on “robot ethics,” Bartneck questions whether Bowman committed murder, in self-defense, by switching off the computer, or if he “simply conducted a necessary maintenance act.”[27] Bartneck does not answer his own question directly, but he argues that the animacy and agreeableness of an artificial intellect impacts the user’s perception of that machine as a “social actor,” as Nass puts it. While Bowman did not legally commit murder according to the formal definition of the word, he destroyed the consciousness of an entity that appeared human in its interactions: HAL could communicate, make autonomous decisions, and even express emotions. As HAL is shut down, it—he? —expresses fear and reverts to childlike reasoning, repeating, “I’m afraid. I’m afraid, Dave. Dave, my mind is going. I can feel it.”[28]
The “HAL question” is an interesting one, but it is a dilemma we likely will not have to tackle in our lifetimes. HAL is capable of passing the Turing test, developed by Alan Turing in the 1950s, which examines a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human.[29] HAL is able to both perceive and express emotions, whereas modern affective computers can only elicit an emotional response in humans. When it learns of Bowman’s plans to disconnect it, HAL tries to manipulate Bowman’s emotions: “I can tell from the tone of your voice, Dave, that you’re upset. Why don’t you take a stress pill and get some rest.” Present day computers are much closer to the primitive “ELIZA” program developed in 1966 by Joseph Weizenbaum, which imitated a Rogerian psychotherapist when interacting with a user. As in talk-psychotherapy, ELIZA directed conversation by rephrasing users’ inputs in the form of questions. ELIZA was convincing primarily because “users were generous in attributing understanding to it” and “suspended such judgments” that the computer knew nothing about their thoughts and emotions.[30] However, ELIZA broke down if users inputted an unconventional statement, for example:
User: I am dead.
ELIZA: And how long have you been dead?
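ELIZA’s original implementation long predates modern scripting languages; the fragment below is only a rough Python approximation of its pattern-and-reassembly idea—match a template, swap the pronouns, and bounce the user’s statement back as a question—which is enough to reproduce both the therapist illusion and the “I am dead” breakdown quoted above.

```python
import re

# A rough approximation (not Weizenbaum's code) of ELIZA's pattern-and-reassembly
# rules: match a template, swap first- and second-person words, and return the
# user's own statement as a question.
PRONOUN_SWAPS = {"i": "you", "am": "are", "my": "your", "me": "you"}
RULES = [
    (re.compile(r"i am (.*)", re.I), "And how long have you been {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
]

def swap_pronouns(text: str) -> str:
    return " ".join(PRONOUN_SWAPS.get(word.lower(), word) for word in text.split())

def respond(utterance: str) -> str:
    statement = utterance.strip().rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.fullmatch(statement)
        if match:
            return template.format(swap_pronouns(match.group(1)))
    return "Please go on."  # default when no rule applies

print(respond("I am dead."))  # -> "And how long have you been dead?"
```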
If most computers are like the ELIZA program—stupid, but startlingly convincing—then the responsibility falls on us to ensure that we do not allow them to manipulate our emotions. The danger of personified computers is not that they have the intelligence and autonomy to beguile us for their own purposes, as HAL does in 2001. Computers possess nowhere near that kind of capability, and at the present we have no reason to believe that they will ever achieve full cognitive and emotional functions. My worry is that people will anthropomorphize affective computers and “expect human-like intelligence, understanding, and actions from them.”[31] Humans place trust in “affective cues”[32] when other people have consistent body language, facial expressions, and tone of voice; they are distrustful when another human shows physiological signs of stress or deceptive behavior. Humans also have two separate pathways for expressing true vs. fake emotions, such as the spontaneous Duchenne smile vs. the so-called “Pan-Am” polite smile.
Unfortunately, unlike humans, computers can be programmed to portray any emotion with complete cogency, and they could even be made to “forge affective channels.” While a computer itself has no malicious intent to mislead its user, its engineers may decide to abuse that user’s trust. A computer could be programmed to express an insincere emotion in order to manipulate its user—to gather information for advertising purposes, to push for more intrusions on privacy, or to analyze the user’s emotional state. Picard presents one disturbing application of affective computers: that your emotional expressions could be used for raising your medical insurance rate if your computer determines that you are frequently in high-stress situations.[33] Our science-fiction-fueled fears of menacing, autonomous machines may be realized in a much different way than we expected—through friendly, agreeable software agents that are able to lie to us with a straight “face.”
Along with being wary of computers’ ability to manipulate our emotions, we must also determine who or what is responsible for a computer’s actions. In his essay on robot ethics, Bartneck poses the question of whether HAL—the actual agent of murder—or its programmers are responsible for the deaths of the crew members in 2001.[34] Bartneck leaves this question unanswered, and it is a relevant one today. The state of California recently passed a law authorizing the testing of driverless vehicles on roads, with a human passenger required for safety purposes. Nevada has already approved licensing for these autonomous cars—human copilot optional. Autonomous vehicles have many benefits, such as greater fuel efficiency, lower emissions, and a reduction in the thousands of deaths and injuries that occur as a result of human distraction, intoxication, or miscalculation behind the wheel. However, even if driverless technology can match human capabilities, self-driving cars raise many legal and ethical issues. Practical questions—such as whether or not the police have the right to pull over autonomous vehicles, and how the federal government should regulate driverless technology—have yet to be answered.[35] And who will be blamed if the machine malfunctions? The human passenger? The software programmers? The automaker? The computer itself? To this question, California Governor Jerry Brown responds, “I don’t know—whoever owns the car, I would think. But we will work that out.” Indeed, if we seek to employ computers to their “fullest possible use,” there are many ethical implications of artificial intelligence and human-computer interaction that we still need to “work out.”
Conclusion
When I was in elementary school, circa 2003, I—like many of my peers—became a regular user of a website called Neopets.com. The site is the host of a virtual world called “Neopia,” in which players adopt Neopets and care for them by amassing currency in the form of Neopoints. What I hadn’t expected was how time-consuming being a Neopet owner would be. My pets required daily care and maintenance: feeding, grooming, education, entertainment, and ego massaging. If I failed to provide the proper care, the site warned ominously, my pets would waste away and eventually perish. What I found, after taking a weeklong hiatus from Neopia, was that my pets would never actually die. The founders of Neopets.com must have decided that such a fate would be too traumatizing for a user base of which eighty percent was between the ages of twelve and seventeen.[36] However, when I returned from a leave of absence, my pets would gaze out woefully at me from their virtual world, complaining of hunger, fatigue, and general malaise. Eventually, my mother decided that the website was too stressful and suggested that I deactivate my account. In order to do so, I had to disown my pets by, in Neopian terms, “abandoning” them to the Neopian Pound and the cruel hands of the overseer, “Dr. Death.” It speaks to how traumatizing the experience was that I still remember it today. First, the Pound informed me that I was a “cruel, irresponsible owner” and asked me to reconsider. Next, I had to click the “Abandon” button five times; each time, an increasingly desperate message popped up above my pet, such as “Fine, throw me away,” or “Don’t leave me here to die!” By the time I had heartlessly clicked “Abandon” the sufficient number of times for each of my four virtual pets, I was in tears.
I conclude with this childhood anecdote in order to bear witness to the power of affective technology. Even as a nine year-old, I was well aware that my Neopets were imaginary bits of cyberspace. Yet they—or rather, the Neopets.com creators—were able to manipulate my mental state to induce a whole range of emotions: loyalty, guilt, regret, dejection, and so on. Unlike our predecessors, my generation—and generations to come—will have grown up with computers and be exposed to them from a very young age. Even a tech-savvy software engineer such as my father is not tempted by the pull of websites such as Neopets or Facebook, most likely because they were not a large part of his early, formative years. So as computers evolve and become more integral in our daily lives, our expectations and perceptions about computers need to evolve as well.
We tend to buy into the idea that, as HAL declares, “it is an unalterable fact that [computers are] incapable of being wrong.”[37] We typically view computers as sources of cool, inarguable reason, and aspire to their level of algorithmic reasoning and accuracy. Atul Gawande, an operating room surgeon, says that “the highest praise I can get from my fellow surgeons is ‘You’re a machine, Gawande.’”[38] We also regard computers as emotionless and unfeeling; for example, when I feel depressed or devoid of emotion, I announce that “I feel like a computer.” However, as computers become more affective, or capable of inducing emotions in their users, we unconsciously treat them as human-like, social beings: by assigning them genders, empathizing with them, expecting them to flatter us, reciprocating their helpfulness, and retaliating when they wrong us. And as we spend an increasing amount of time with our computers, it is fitting that they become our quasi-emotional companions. Even though I know intellectually that Siri is just a soulless computer, it is somewhat comforting to hear her response:
Me: I’m sad.
Siri: I’m sorry to hear that. You can always talk to me, Amy.
But while Siri and I are still miles of neurons apart, this does not mean that computers are limited in terms of the range of human behavior they can learn to imitate. In his famous 1950 paper, “Computing Machinery and Intelligence,” Alan Turing lists many of the arguments he heard from artificial intelligence skeptics, who claimed that a machine would never be able to:
Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make some one fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity as a man, do something really new.[39]
Turing discounts these problems as temporary engineering roadblocks, and argues that “the criticism that a machine cannot have much diversity of behavior is just a way of saying that it cannot have much storage capacity.” And it hardly seems that computers’ storage capacity will stop expanding. Computers of the 1950s had a capacity of around ten kilobytes; today, my laptop has eight gigabytes of memory, nearly one million times as much. Therefore, as Turing asserts, independent-thinking, strawberry-enjoying machines are “possibilities of the near future, rather than Utopian dreams.”
Works Cited
2001: A Space Odyssey. Dir. Stanley Kubrick. Perf. Keir Dullea, Gary Lockwood, and William Sylvester. Metro-Goldwyn-Mayer, 1968. Transcript.
Bartneck, Christopher, Michael van der Hoek, Omar Mubin, and Abdullah Al Mahmud. “Daisy, Daisy, Give me your answer do! Switching off a robot.” Proceedings of the 2nd ACM/IEEE International Conference on Human-Robot Interaction, Washington DC (2007): 217-222.
Cohn, Jonathan. “The Robot Will See You Now.” The Atlantic. March 2013. <http://www.theatlantic.com/magazine/archive/2013/03/the-robot-will-see-you-now/309216/>. Accessed March 4, 2013.
Cort, Julia and Michael Bicks. "Smartest Machine On Earth." NOVA. Prod. Michael Bicks. PBS. 14 September 2011. Television. Transcript.
Edgar, Stacey L. Morality and Machines: Perspectives on Computer Ethics. Boston: Jones and Bartlett, 1997.
Gawande, Atul. “No Mistake. The future of medical care: machines that act like doctors, and doctors who act like machines.” The New Yorker. 30 March 1998. 75-76.
Hale, Mike. “Actors and Their Roles for $300, HAL? HAL!” The New York Times. 8 February 2011, C2.
Markoff, John. "Collision in the Making Between Self-Driving Cars and How the World Works." The New York Times. 23 January 2012, B6.
Markoff, John. “Computer Wins on ‘Jeopardy!’: Trivial, It’s Not.” The New York Times. 16 February 2011, A1.
Nass, Clifford and Corina Yen. The Man Who Lied to His Laptop: What Machines Teach Us about Human Relationships. New York: Current, 2010.
Nass, Clifford and Youngme Moon. “Machines and Mindlessness: Social Responses to Computers.” Journal of Social Issues 56, no. 1 (2000): 81-103.
Picard, Rosalind W. Affective Computing. Cambridge, MA: MIT, 1997.
Seabrook, John. “Hello, Hal: Will we ever get a computer we can really talk to?” The New Yorker. 28 June 2008, 1-5.
Spiegel, Alix. "No Mercy For Robots: Experiment Tests How Humans Relate To Machines." NPR. NPR, 28 January 2013.
<http://www.npr.org/blogs/health/2013/01/28/170272582/do-we-treat-our-gadgets-like-they-re-human>. Accessed February 12, 2013.
Turing, Alan M. "Computing Machinery And Intelligence." Mind LIX.236 (1950): 433-60.
Weingarten, Marc. “As Children Adopt Pets, A Game Adopts Them.” The New York Times. 21 February 2002, 1-2.
[1] 2001: A Space Odyssey. Dir. Stanley Kubrick. Perf. Keir Dullea, Gary Lockwood, and William Sylvester. Metro-Goldwyn-Mayer, 1968. Transcript.
[2] Clifford Nass and Youngme Moon, “Machines and Mindlessness: Social Responses to Computers.” Journal of Social Issues 56, no. 1 (2000): 82.
[3] Christopher Bartneck, Michael van der Hoek, Omar Mubin, and Abdullah Al Mahmud, “Daisy, Daisy, Give me your answer do! Switching off a robot.” Proceedings of the 2nd ACM/IEEE International Conference on Human-Robot Interaction, Washington DC (2007): 217-222.
[6] Stacey L. Edgar, Morality and Machines: Perspectives on Computer Ethics (Boston: Jones and Bartlett, 1997), 444.
[8] Clifford Nass and Corina Yen, The Man Who Lied to His Laptop: What Machines Teach Us about Human Relationships (New York: Current, 2010), 4-5.
[10] Mike Hale, “Actors and Their Roles for $300, HAL? HAL!” The New York Times, 8 February 2011, C2.
[11] John Markoff, “Computer Wins on ‘Jeopardy!’: Trivial, It’s Not.” The New York Times, 16 February 2011, A1.
[12] Julia Cort and Michael Bicks, "Smartest Machine On Earth." NOVA, Prod. Michael Bicks. PBS. 14 September 2011. Television. Transcript.
[13] Cort, "Smartest Machine On Earth."
[14] Markoff, “Computer Wins on ‘Jeopardy!’: Trivial, It’s Not.”
[16] Hale, “Actors and Their Roles for $300, HAL? HAL!”
[19] Alix Spiegel, "No Mercy For Robots: Experiment Tests How Humans Relate To Machines." NPR. NPR, 28 Jan. 2013. Web. Accessed 12 Feb. 2013.
[35] John Markoff, "Collision in the Making Between Self-Driving Cars and How the World Works," The New York Times, 23 January 2012, B6.
[36] Marc Weingarten, “As Children Adopt Pets, A Game Adopts Them,” The New York Times, 21 February 2002, 1-2.
[38] Atul Gawande, “No Mistake. The future of medical care: machines that act like doctors, and doctors who act like machines,” The New Yorker, 30 March, 1998, 76.
[39] Turing, "Computing Machinery And Intelligence."