Wednesday, May 15, 2013

MongoDB San Francisco

Reprinted from an email sent to Hyperfine clients regarding Rob's attendance at the MongoDB conference in San Francisco:

I attended the MongoDB conference at the Palace Hotel in San Francisco on May 10th of last week. The conference, hosted by 10gen, was very well attended, with roughly 1,000 attendees by my estimation. The conference page lists the event sponsors, but I'll mention here that I spoke with representatives from MongoLab, Microsoft Open Technologies, and StrongLoop, among others.

The main takeaways from the conference were that Mongo is growing in popularity, robustness, toolsets, and best practices. While the debate continues to swirl over traditional relational database approaches versus NoSQL in general and Mongo in particular, this feels to me more and more like a distraction from making progress on individual projects. The arguments feel like debates over theoretical computer science principles, but just as RSS flourished in its time and JSON has overwhelmed XML and XML Schema, simplicity rules in the world of rapid, agile development, and Mongo is certainly a strong player in the agile world. The benefits I've suggested in the past regarding JavaScript-everywhere, and leveraging that language/object-description ubiquity, seemed to me to be a strong theme at the conference.

The keynote address, shared by Eliot Horowitz and Max Schireson of 10gen, covered the growth in functionality and usage of Mongo. Eliot discussed the query optimizer, which has been a part of Mongo since day one, and noted that 10gen is in the process of incrementally rearchitecting it to support upcoming features, including adaptive query evaluation and plans. The goal is to provide more insight into performance, and better automatic adaptation for improved performance. 10gen recently released Mongo backup services, which sound like a huge win and were noted by other, non-10gen presenters, and it is working on automation tools for database management. Eliot emphasized that the new backup services go beyond disaster recovery scenarios: for example, data can be sampled from production deployments for use in development and in data analysis. Similar approaches were discussed in breakout sessions regarding the use of Mongo replication.

The day was segmented into a selection of lectures, each lasting around 40 minutes, with five offered per time segment. I chose a sampling from the development and operational tracks. The first was a talk by Charity Majors from Parse, an intermediate-level discussion of managing a maturing MongoDB ecosystem. Charity is in charge of Parse's data and database operations and gave a high-level overview of the sorts of strategies and scripts her team runs to keep Parse running. This was a deeper talk than I anticipated (several people in the audience were nodding their heads in agreement at certain points, indicating to me that they were probably DBAs managing Mongo systems in their groups), so I tried to get a general sense of the gotcha issues in running a production deployment. I doubt these issues are greatly different from running a production SQL Server or Oracle environment, but the key takeaway is that any production deployment is going to require a dedicated team (one or more persons) to monitor and maintain the database. Charity talked about the strategies she's learned over time, the rough percentage thresholds in performance and capacity to watch for, and the sorts of scripts and tools to have in hand prior to banging into a major downtime problem. She reiterated throughout that this is more an art than a science, often particular to the sort of application or service being deployed. It's clear that these personnel, once hired and proven to be good, should be nurtured in an organization. It would be painful to lose that institutional knowledge.

As an aside, Charity mentioned Facebook's purchase of Parse. Her demeanor, small joke, and giggle, seemed to imply there was less than overwhelming happiness at Parse regarding the buyout. Interesting.

Jason Zucchetto of 10gen gave a talk on Mongo schema design, which I attended next. I was particularly interested in this talk since I've been building data models against MongoDB, using the Mongoose schema/driver layer to model data for a Hyperfine project. One of my concerns has been identifying the best practices for defining schema and adapting it over the product life cycle. Jason's talk confirmed most of my assumptions, the basis of which is that under Mongo, and NoSQL databases in general, schema design is less a waterfall process than an agile one: figure out your anticipated usage patterns and define your schema from there. While that doesn't sound radically different from a SQL approach, the difference lies primarily in the use of denormalized schema. Under Mongo, it's important to let go of normalization and to use data duplication strategically, in anticipation of the sorts of queries and data presentation the application will rely on. Mongo is very flexible in how queries can span embedded documents and arrays holding one-to-many or many-to-many relationships. It always comes down to doing the right thing for the application, but much of the time, normalized data is the enemy of performance and flexibility. Jason pointed out that in many RDBMS scenarios, populating a web page may take several queries, but a properly defined Mongo schema can return the data in one or two gulps. Reflecting back on Eliot's and Charity's talks, Mongo's profiler tools can be a huge help in identifying choke points. Again, this isn't much different from the tools SQL offers; the issue is the approach.
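
To make the denormalization point concrete, here's a small sketch in plain JavaScript of the kind of document shape Jason was describing. The collection and field names are my own invention, not from his talk:

```javascript
// A denormalized "order" document (hypothetical shape). Line items and a
// snapshot of the shipping address are embedded in the order itself, so the
// page that displays an order needs exactly one query instead of several joins.
const order = {
  _id: "ord-1001",
  customerName: "A. Lovelace",          // duplicated from the customer document
  shippingAddress: {                    // a snapshot, not a foreign key
    street: "1 Analytical Way",
    city: "London"
  },
  items: [                              // a one-to-many relationship held in an array
    { sku: "GEAR-42", qty: 2, unitPrice: 19.5 },
    { sku: "COG-7", qty: 1, unitPrice: 5.0 }
  ]
};

// Everything the order page needs arrives in one gulp:
const total = order.items.reduce((sum, it) => sum + it.qty * it.unitPrice, 0);
console.log(total); // 44
```

The duplicated customer name is the deliberate trade: old orders keep the snapshot even if the customer record later changes, which for many applications is exactly the right behavior.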

I also attended a talk by Achille Brighton of 10gen covering the basics of replication in MongoDB. Replication allows you to establish a primary database node in a cluster, with one or more secondaries that duplicate the data. A voting algorithm within the cluster identifies the primary, with heartbeats watching for nodes that go down and subsequently elevating a secondary. The Mongo tools appear to make configuring replica sets easy, with configurations defined in JSON. Replication is used not only for robustness: it can satisfy geographical distribution for fast reads against local servers, and it can support data analysis against secondary nodes without affecting the main application's performance. Replication and sharding often go hand in hand, but I was not able to attend any sharding strategy talks.
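
For the curious, the configuration really is just a JSON document handed to the mongo shell. A minimal three-member sketch (the hostnames are hypothetical) might look like this:

```javascript
// Run once from the mongo shell against the member that should seed the set.
rs.initiate({
  _id: "rs0",                                     // the replica set's name
  members: [
    { _id: 0, host: "db1.example.com:27017" },    // eligible to become primary
    { _id: 1, host: "db2.example.com:27017" },    // secondary
    // priority 0 keeps this member out of elections; it can absorb
    // analysis workloads without ever becoming the primary.
    { _id: 2, host: "db3.example.com:27017", priority: 0 }
  ]
});
```

The heartbeat and election behavior Achille described then runs automatically among the members.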

Max Schireson, the CEO of 10gen, gave a talk on indexing and query optimization. I've read online posts from people struggling with MongoDB performance, often dismissing Mongo out of hand due to the performance issues they encountered, but I've been suspicious of these, just as I would be of novices discussing SQL Server perf issues. Max pointed out that the difference between a query that takes 2 seconds and one that takes 20 milliseconds is often a matter of a properly defined index. He noted that some index selection issues can be subtle, but they are nonetheless critical to get right, and that using the profiling tools to identify these issues is important. He also noted that Mongo internally attempts to identify the best query plan, and these algorithms are continually being improved in later releases. It's clear to me that ad-hoc queries should be avoided in application development, with specific data access methods written within the schema models so that performance knowledge is contained within one area of the code and not scattered throughout the codebase. I've been doing this with Hyperfine code, and the pattern is similar to writing well-tuned stored procedures in SQL with language-based data access layers.
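
As a sketch of the workflow Max described (the collection and field names here are mine, not his), the mongo shell makes it easy to see whether a query is using an index:

```javascript
// In the mongo shell. Without an index on "email", explain() reports a
// BasicCursor, meaning every document in the collection was scanned.
db.users.find({ email: "ada@example.com" }).explain();

// One properly chosen index is often the difference between seconds
// and milliseconds:
db.users.ensureIndex({ email: 1 });

// explain() should now report a BtreeCursor using the new index, with a
// far smaller "nscanned" count.
db.users.find({ email: "ada@example.com" }).explain();
```

Wrapping queries like this inside named data access methods, rather than scattering ad-hoc finds through the codebase, keeps that indexing knowledge in one place.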

Of the conversations I had between sessions, the most interesting was with Will Shulman, the CEO and co-founder of MongoLab. Hyperfine is using MongoLab to host its MongoDB instances, and I had a fun discussion with Will regarding some of the connectivity issues I've encountered with hosting on Azure, as well as his general computer science background from Stanford. I also talked with Joo Mi Kim, MongoLab's VP of finance and operations. Next to MongoLab's booth was Microsoft's Open Technologies group. I spoke with the evangelist manning that booth and brought up an issue Hyperfine and some of its clients have had in being bound to Windows machines to do Node.js deployments. The tool at the center of this, cspack.exe, is used both by Visual Studio and the Azure PowerShell tools for packaging up a deployment. I was amazed to hear the evangelist say he had never heard of it. He asked me to send him an email regarding the issue, so I did. If I hear anything useful back from him, I'll pass it along to interested parties.

I had lunch with a team from a San Francisco tech company who seemed skeptical of Mongo. As their chief developer put it, "It seems useful for some scenarios," but that statement struck me as a tautology without much insight. I talked to them about some of the things I've discussed with folks on this email distribution regarding agility, JavaScript-centric architectural approaches, advantages to operations, institutional knowledge, and adaptability. I think that may have opened their eyes a bit further to the possibilities. It was very interesting to hear the almost complete disregard for all things Microsoft within this group, an attitude they said is shared by much of the Valley. Microsoft and its technologies seem to be second cousins in the Valley; I saw this in my visit to Stanford as well.

Included within the conference swag bag was a copy of "The Little MongoDB Book," which can also be found online. This book offers a great overview of MongoDB and can be read in an afternoon.

Finally, for completeness' sake, I'll mention a forum thread sent to me by a former colleague of mine at Microsoft in the '90s. Both of us are veterans/victims of the architectural-purity-versus-rapid-deployment battles (which no one but Windows NT Cairo team members will remember). The discussion covers relational databases versus NoSQL databases. I think some of it misses the point, but it's an interesting discussion nonetheless.

-- Rob Bearman

Monday, April 8, 2013

Parse and the art of abstraction

Way back in the days of the dinosaurs, I was at Software Publishing Corp. internally evangelizing the use of a GUI abstraction toolkit we called PMRT. I forget the meaning behind the first two initials (the latter being "run-time"), but I remember they referred at least humorously to the initials of the original author of the system - Peter M. The point of PMRT was to provide a programming abstraction to the hottest GUIs of the time, namely OS/2 Presentation Manager (argh, there is that "PM" again), the soon-to-be-released Windows 3.0, X/Motif, and the Mac. This was in the days before you were born - around 1989.

I traveled to Madison, Wisconsin, to meet with the engineering team of a group that SPC had recently bought out, in order to convince them, by begging or issuing threats, to design their product development around PMRT rather than writing "native" UI code. The benefits included having a common UI code base. The drawbacks were everything else, primarily the lack of control over performance and the disparity between what the native system could do and the subset PMRT provided. There were other arguments for and against, but my point is to show that abstraction layers, and the debates they inspire, go back to when you were teething on Barney dolls.

The team at Parse is most impressive and recently gave a talk at the HTML5 Developer Conference in San Francisco regarding the virtues of HTML5 versus writing native device applications, using a backend as a service, namely Parse. You can find the talk here. Enter a bogus email address to access the talk if you don't want to provide a real one.

Again, it's the age-old debate regarding abstraction layers, but while the arguments for and against PMRT were relatively simple, I think the debate over these newer abstraction models involves much deeper, more subtle issues. What the Parse team is doing here is less a discussion of HTML5 versus native than an argument for the aggregating backend service. But make no mistake; the issues are closely tied together.

Parse argues from a historical perspective that providing a unified application development model is what everyone is working toward, and you really can't argue with that. Just as the SPC teams would be reinventing the wheel again and again in building system-native GUI code back in 1989, teams today are trying to solve the same problems covering user identification, push notification, and local and remote storage. Parse's value proposition is that they provide a sane, common layer that all can use. They're the PMRT of today, with a very nice acquihire exit plan to go with it.

The difficult thing for a company getting its feet wet in application development with the standard database-plus-frontend app model is understanding the long-term implications of using a shortcut model like Parse. I think Parse is similar, on a broader scale, to what Visual Basic provided when it first came out: a rapid development environment that allows the less technically inclined to get to market quickly. The downsides are hidden until the application or application ecosystem grows in size and popularity. Only then will issues of performance and scale arise, and only then will you really know whether the system that got you to market so quickly gets in your way.

Hyperfine has been consulting for a company with a limited, but growing engineering team, and we've made several suggestions to point the group to the right long-term path. The best recommendation we've made, I think, is pushing JavaScript as the standard language, which itself implies certain directions in tools and platforms. There are several reasons we pushed this, including:

  • It opens up hiring to a much broader pool of developers
  • It is the basis of server-side technologies, namely Node.js, as well as being the ubiquitous client-side browser-based language
  • JSON as a serialization standard becomes the natural, native data format, so models can be shared on both the server and the client
  • NoSQL databases such as MongoDB are built around JSON as documents
There are other, slightly more political reasons, including the ability to decouple your development efforts from certain platforms and make your target environments more portable (think JavaScript versus C# and the .NET ecosystem, for example). While these may seem, well, political, they also have implications for hiring, costs, and product lifecycle flexibility.
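
As a tiny, hypothetical illustration of the shared-model point: a validation function written once can run unchanged in Node.js on the server and in the browser on the client, against the very same JSON objects that travel over the wire and live in the database.

```javascript
// Hypothetical shared model code: this file could be required by a Node.js
// server and included in a browser page with no changes.
function validateUser(user) {
  const errors = [];
  if (!user.name) errors.push("name required");
  if (!user.email || !/@/.test(user.email)) errors.push("invalid email");
  return errors;
}

// The same plain object is the wire format (JSON), the stored document
// (BSON in MongoDB), and the in-memory model on both client and server.
const user = { name: "Ada", email: "ada@example.com" };
console.log(validateUser(user).length); // 0 -- no validation errors
```

One definition, one language, no translation layer between the tiers.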

So why do I mention Parse? Hyperfine investigated the use of Parse for rapid development and deployment, which seemed like an excellent solution for a small, growing engineering team in a company with limited product development experience. Parse's value proposition is a strong one. There are so many issues involved in building an application platform that it's easy for a young team to reach analysis paralysis. The issues include in part:
  • Web engine platform
  • Languages
  • Database
  • Hosting
  • Deployment models
  • Backup and disaster recovery
Parse answers many of these questions, hiding many of the answers behind an abstraction layer. They look to have done excellent work. Again, the Parse team looks extremely impressive.

So, what's the problem? As with any abstraction layer, it comes down to details, many of which don't become apparent until you're dealing with a production environment in full use by many users. If you run into performance issues, there is a barrier beyond which you can't travel, namely whatever happens on the other side of the abstraction layer's API. Now, this is a bit unfair, since any system you rely on can be defined as an abstraction layer. If you work directly against MongoDB's interface, that's an abstraction layer. You need to know the right things to do, in the right ways, to squeeze the best performance out of MongoDB. It's a similar story with Parse, except that Parse is an additional layer on top of Mongo. So, focusing only on the database layer of your application, the early question is: can you do everything through Parse that you could do, or will need to do, through native Mongo?

If not, then you've got trouble. Now multiply these sorts of questions against the matrix of required features and abstraction layer APIs. At some point, you will grow so expert in the development and refinement of your application that you will need fine-tuned access to the layers below you. What was a cure for the difficulty of getting off the ground (Parse as a rapid development environment) can become a curse as you find you no longer need the crutches the abstraction layer provided.

There are no rights or wrongs here. What's important is for an organization to understand its strengths, weaknesses, needs, and market position to make the best choice of tools and strategy.

What Hyperfine suggested was to use Parse as a rapid development environment while building the company's expertise in the underlying technologies over time. At the same time, as hiring in the engineering group progressed, small teams could be assigned to build similar in-house abstractions natively against systems like MongoDB, push notifications, etc. In short, use Parse to buy time for building internal expertise. If it turns out that Parse imposes few or no limitations compared to rolling your own, it's all gravy.

Internally at Hyperfine, I'm working on a system built against MongoDB, using Mongoose as a model interface, with Node.js as a REST and website engine. I'll have more to say about this effort in the future. Until then, thanks, Peter M., for the early learning!

-- Rob Bearman

The Soul of a New Machine: The Social-Psychological Aspects of Human-Computer Interaction

The following is a paper written by Hyperfine's Amy Bearman as part of her first-year course work at Stanford University. Amy is studying computer science and has a particular interest in artificial intelligence. Update: Amy was awarded Stanford's Boothe Prize for excellence in writing for this paper.
The Soul of a New Machine:
The Social-Psychological Aspects of Human-Computer Interaction
I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do.
—The HAL 9000 computer,
in 2001: A Space Odyssey[1]
After my Android phone recently gave its last gasp, I switched to the realm of Apple products and their promises of ultra-thin, ultra-fast, ultra-light—ultra-everything—technology. Cradling a brand-new iPhone 5, I amused myself with the voice recognition tool conjured up by the click of a button: the much advertised “personal assistant” that Apple christened “Siri.” At first, I marveled at the array of built-in functions Siri could “assist me with”: she could text my mother, play a certain song, navigate to my home, or even reserve a table at a restaurant. Soon, I was tempted to test out Siri’s reaction to unpredictable, playful questions or statements that might require a more human response:
Me:        Define supercalifragilisticexpialidocious.
Siri:        The sound of that is something quite atrocious.
Me:        Do you believe in God?
Siri:        My policy is the separation of spirit and silicon.
Me:        Will you marry me?
Siri:        My End User Licensing Agreement does not cover marriage. My apologies.
Me:        You’re my best friend.
Siri:        If you insist.
Me:        How much do you cost?
Siri:        If you have to ask the price, you can’t afford it.
Siri’s answers were humorous and clever, and I reacted emotionally to them. I was annoyed when she avoided answering a question. I laughed when she told a joke. I asked the same question twice to see if she gave a different response, even though I knew she would not have any new information. At times, I found myself carrying out a genuine “conversation” with her.
Of course, Siri is not a woman or even a person. She is a computer, and computers are not alive. They have neither affections nor passions. If you prick a computer, it will not bleed. Nor will it laugh if you tickle it, die if you poison it, take revenge if you wrong it. Hollywood blockbusters such as I, Robot or 2001: A Space Odyssey may try to convince us otherwise, but a computer has no sense of self. Unless programmed to do so, it will not even refer to itself as “I.”[2] Computers are essentially soulless lumps of metal and silicon: a collection of processing units and integrated circuits that have the ability to carry out complex operations much faster than the human brain. My cell phone’s computer has no personality; it only reads from a script of pre-recorded responses that Apple’s engineers have written. As a computer science major, I am well aware of this fact. I recognize that my iPhone—and all computers—are just glorified electronic data storage and analysis devices. So why do I inject so much meaning into Siri’s tongue-in-cheek responses?
What sets Siri apart from other voice recognition software is that she—it—projects the illusion of personality, encouraging iPhone users to perceive human traits in their computers and to develop an emotional bond with the technology. In this respect, she exploits our tendency to confuse life-like characteristics with the presence of life. In biological terms, life requires a number of natural processes, including growth, metabolism, reproduction, and adaptation. But people's perception of life depends primarily on the observation of intelligent behavior.[3] If people perceive that an entity possesses intelligence, they also tend to believe that the entity has a higher level of consciousness. For this reason, people treat domestic dogs differently from trees: both are alive, but dogs appear to be smarter and more self-aware.
The perception that intelligence and consciousness determine life applies to computers as well. The vast majority of participants in studies claim that they would never treat a computer as a person, yet their professed "rejection of anthropomorphism" contrasts with their actual behavior in labs and in everyday life.[4] As researchers Clifford Nass and Byron Reeves have established in their "Computers are Social Actors" paradigm, the more intelligent and conscious a machine appears, the more people apply standard social norms when interacting with it. That phenomenon is the subject of this essay. First, I will investigate how and why people use the same social heuristics when they interact with technology as they do for interactions with other humans, in particular: the application of gender stereotypes, empathy, responses to praise and criticism, the evaluation of the computer's "personality," and reciprocity and retaliation. Second, I will analyze both the benefits and ethical implications of artificial intelligence and social computers.
In so doing, I will reveal that there are two ways in which artificial intelligence imitates human consciousness: affectivity—the ability to cause or express emotion—and autonomy. For psychological reasons that I will explain, we actually perceive these computers as being life-like and so endow them with human qualities, therefore investing more of our trust in them than if we viewed them as mere computational devices. The more human-like a computer becomes, the more we trust it to make decisions for us. And in many ways, computers are better decision-makers than people. They are not susceptible to common causes of human error, such as biases, fatigue, distraction, memory limitations, and an inability to multitask. A computer might be just the tool to handle complex activities such as driving a car, juggling the GPS navigation system, stereo, air-conditioning, entertainment for the kids, cruise control, etc., while still allowing the driver to remain in charge.
The problem with artificial intelligence is that we want the best of both worlds: we expect computers to act as our emotional companions while still behaving with computer-like precision. Yet, this is an impossible standard. As I argue here, more affective and autonomous machines are fit to act as extensions of the human mind, but they cannot negate human fallibility or be held accountable for their actions as sovereign beings. For, as I have learned, although she can tell a sophisticated joke or deliver a snarky comeback, Siri may misinterpret the most innocuous request:
Me:        Call me a taxi.
Siri:        From now on, I’ll call you ‘Taxi,’ okay?
Reason vs. Emotion: Assigning Gender Stereotypes to Computers
To understand how humans interact with computers, it is important first to understand how people interact with each other. There are two primary forces that drive human behavior: reason and emotion. These two forces are traditionally considered to be inverses of each other. We assign reason to the realm of mathematics and the hard sciences, and this field is conventionally considered to be devoid of emotion. In contrast, emotion and intuition are more influential in the liberal arts, such as language, literature, and philosophy. People associate reason with the "very foundations of intelligence," while they sometimes disparage emotion as a lesser way of knowing.[5] As a female pursuing a science/engineering education in a field dominated by men, I am inclined to reject emotion as a source of knowledge in order to challenge the gender stereotype that men are better at logic and reasoning, while women tend to be more "in touch" with their emotions. However, neither gender operates exclusively in one realm, nor is reason necessarily the superior way of knowing. There are numerous personality tests that measure one's "emotional intelligence," which many psychologists now regard as a valid skill in analyzing interpersonal relationships and making decisions. Recent studies show that "emotions play an essential role in rational decision making, perception, learning, and a variety of cognitive functions." Attempting to dichotomize the brain into thinking and feeling sections—"cortical and limbic activity"—understates the complexity of the human brain.
The influence of emotion on decision-making causes humans to be unpredictable and sentimental when they make choices; computers, by contrast, are often hailed as the “paradigms of logic, rationality, and predictability.”[6] Computers are adept at using formal logic (such as the Boolean “and,” “or,” and “not” operations) to carry out a finite sequence of algorithms. Traditionalists argue against endowing computers with the capacity for emotion for both practical and ethical reasons. Practically speaking, why impair computers’ clear algorithmic reasoning with human-like “disorganized,” “irrational,” and “visceral” emotional responses?[7] Further, Stacey Edgar, professor of computer ethics, raises the ethical implications of creating an emoting artificial intelligence. If we give computers the ability to express emotion, she asks, should we also equip them with pain nerves or the ability to express pleasure? What is our moral responsibility to the machines we create? I will discuss these ethical dilemmas later in this essay, but for now I want to analyze computers’ abilities to recognize and express emotions only within the scope of their interactions with humans, which researcher Rosalind Picard calls “affective computing.” Computers incorporating emotions would not be any less masculine, rational, or intelligent, but rather more human, and therefore more suited to communicate with humans.  
In order to test the theory that computers are social actors, researchers have carried out a number of studies that suggest that we apply social heuristics when interacting with computers. Perhaps the most telling are those that involved gender stereotypes—a social category that provokes a powerful visceral response in humans. If people are truly impervious to computers’ attempts to masquerade as human beings, then a computer’s perceived “gender” should have no impact on how an individual interacts with a computer. However, this is not the case. In one amusing instance, the German engineering company, BMW, was forced to recall its navigation system. The cause? German men objected to the female voice used to record directions. Even when assured that the voice was merely a recording, and that the engineers and cartographers who designed the system were all men, some German men simply refused to take directions from a “woman”—computer or not.[8] 
In another study, researchers analyzed whether or not gender stereotypes would affect people's perceptions of voice output on computers. College students, both male and female, were subjected to a tutoring session presented verbally by a prerecorded female or male voice on two topics: "computers and technology" and "love and relationships." The results fell directly in line with the hypothesis that "individuals would mindlessly gender-stereotype computers."[9] Both males and females found the male-voiced computer to be more friendly, competent, and compelling than the female-voiced computer, even though the computers' spoken content was identical. As well, the "female" computer was judged to be more informative about love and relationships, whereas the "male" computer appeared more knowledgeable about technology.
Interestingly, participants in this study made no pretense of believing that one program was written by a male and the other by a female programmer. The research subjects agreed that the computer’s voice output gender did not reflect the gender of the programmer, and it certainly did not represent the computer’s gender itself, since computers have no sexual classification. Participants were aware that the tutors’ voices were pre-recorded, and that the computer was merely an intermediary for transmitting information to the user. The research subjects claimed to harbor no gender stereotypes with respect to people, let alone computers. Yet, they still differentiated between the computer tutors’ proficiency in topics based on reason versus those based on emotion according to the apparent gender of otherwise identical machines. The results were conclusive: when given any human-like qualities at all, such as a voice with a human affect, people treat computers as social beings.
Empathy, Flattery, & the Expectation that Computers Imitate Human Social Behavior
The tendency to imbue computers with emotions and social abilities is even more acute when the person in question is the computer’s creator. One such machine is a computer called “Watson,” which tech giant I.B.M. designed specifically (over the course of four years and tens of millions of dollars) to play Jeopardy! against the smartest human trivia champions. Like an anxious parent, David Ferrucci, leader of the Watson team, defended his progeny from the derision of its competition, the media, and the stand-in host hired to moderate practice matches, who mocked Watson’s “more obtuse answers” more than a few times.[10] (In one memorable case, Watson was given the clue: “Its largest airport is named for a World War II hero; its second largest for a World War II battle,” from the U.S. Cities category. Watson answered, “What is Toronto?????”—clearly a non-U.S. city.)[11] Of course, Watson’s intellect depends on the power of its search engine, so it may bungle questions that seem obvious to humans. Ferrucci laments, “He’s [the Jeopardy host] making fun of and criticizing a defenseless computer!”[12] The I.B.M. engineers and NOVA producers who filmed an episode about the project reinforce this idea that Watson has feelings that can be hurt, by continually referring to the computer as “he” and “him.” Eric Brown, a researcher at I.B.M., even equates Watson’s artificial intelligence with that of its human counterparts: “It’s a human standing there with their carbon and water, versus the computer with all of its silicon and its main memory and its disc.”[13]
Watson’s ultimate victory over 74-time Jeopardy! winner Ken Jennings is a quantum leap forward for artificial intelligence research and natural language processing, and it may portend great advances for consumer products. However, what is most fascinating about Watson is not its prowess at “encyclopedic recall,”[14] but Watson’s ability to charm its engineers and the public. Marty Kohn, a leader of the I.B.M. team that is developing a physician’s assistant version of Watson, attends conferences internationally, where he has grown used to answering the question “Who is Watson?” rather than “What is Watson?”[15] Kohn jokes that his wife keeps his ego in check by reminding him that the enthusiastic audiences for such presentations are “there to meet Watson, not you.” After his loss, Jennings quipped, “I, for one, welcome our new computer overlords.” Jennings was most likely referencing Watson’s formidable intellect and our fear of what The New York Times columnist Mike Hale calls “the metal tap on the shoulder.”[16] However, I do not think that smart machines like Watson will rise up against and replace their human creators by force, as they do in science-fiction movies. Rather, it is more likely that, as computer science engineers continue to refine machine learning, artificially intelligent computers will quietly win us over with their amiable voices and charming “personalities.” And, as Watson proved on Jeopardy!, they may even replace us.
Not only do people empathize with affective computers, but they also expect computers to tiptoe around their feelings. In fact, people respond to praise and criticism from computers much as they do to feedback from other people. Nass discovered a state-of-the-art driving simulation system developed by a Japanese car company that could detect when a person was driving poorly and then inform the driver. He decided to use this system to investigate how a user would respond to being evaluated by the computer “backseat driver.”[17] When a participant made minor driving errors, such as exceeding the speed limit or turning too abruptly, the computer announced that he was “not driving very well” and instructed him to “please be more careful.” However, the participant did not correct his driving according to the information gathered by the simulator’s “impressive force-feedback controls” and “ingenious use of sensors and artificial intelligence.” Instead, he retaliated by oversteering, speeding, swerving from lane to lane, and tailgating other vehicles. In response, the computer’s feedback escalated in severity. Right before the driver, overcome by rage, crashed into another car, the computer warned, “You must pull over immediately! You are a threat to yourself and others!”
Even though the computer simulator was a “highly accurate and impartial source” of information, the driver treated its criticism as a personal attack. This example, while comical, also has a sobering message. Affective computers that are designed to simulate emotion also induce an emotional response in their users—emotion that can be negative. And while people may profess a desire for straight talk from a computer—that is, accurate and constructive criticism—this is actually not the case. People expect computers to obey social heuristics, including the ability to judge how their criticism is affecting someone and to adjust it accordingly. However, the car simulator had no way of evaluating the participant’s response to negative stimuli, so it could not perceive the driver’s heightened agitation. With the computer’s continued criticism left unfettered, the driver addressed the critiques with a fight-or-flight response by “adjusting the wheel (something to do), driving rapidly (rapid movement), and tailgating (a combination of aggressive behavior and trying to alter the situation).”[18] This study demonstrates that, once again, people set impossible standards for computers. Paradoxically, we demand that computers provide us with impartial, accurate data while simultaneously emulating human behaviors such as empathy and flattery. Should we then design affective computers that tell us white lies to make us feel better about ourselves, or do we need to rethink our expectations of computers?
Applying the Principle of Reciprocity to Computers
Another essential human social characteristic is the ethical code of reciprocity, which is what compels us to give back to others the same form of behavior that was given to us. We feel uncomfortable and unbalanced when we don’t return a smile, greeting, gift, or favor.[19] Therefore, if people truly treat computers as social beings, then they should apply more reciprocity to a polite, agreeable computer than they would to an unhelpful one. Nass and other researchers conducted an experiment in which they asked participants to complete two tasks. In the first task, the user carried out a series of web searches with the computer; in the second, researchers asked the user to complete a tedious task to help the computer develop a color palette that matched human perception. The computer conducting the web queries was extremely helpful for half the users and very unhelpful for the other half. In the second task, for some of the users, a screen requesting help appeared on the user’s computer; for others it appeared on a different, identical computer.[20] 
One would expect the participants’ behavior to be consistent across all permutations of the experiment: helpful or unhelpful machine, familiar or unfamiliar computer. However, the results of the experiment matched reciprocity norms. When paired with a helpful computer in Task 1, participants performed “significantly more work” for their computer, with greater attention and accuracy, than did participants who were paired with different computers for the two tasks. In other words, the participants reciprocated the helpfulness of their computers. There was also an opposite effect for participants who were paired with an unhelpful computer in Task 1. These users retaliated by performing far fewer color comparisons for the original, disagreeable computer versus the identical computer across the room. Nass attributes these discrepancies to social heuristics that humans unconsciously apply when interacting—even with nonhumans. He argues that “the human brain is built so that when given the slightest hint that something is vaguely social, or vaguely human … people will respond with an enormous array of social responses including, in this case, reciprocating and retaliating.”[21] In this case, likeability is arguably the most important “personality” characteristic a computer can have. A computer’s perceived agreeableness and intelligence will, therefore, influence its user to treat the computer as a social being whom the user cooperates with, rather than opposes.
Perhaps the most fundamental axiom of a reciprocal society is the unstated promise its citizens make to each other: thou shalt not kill. For any nonpsychopathic person, the idea of taking another person’s life causes us great moral distress. This anguish is lesser for nonhuman creatures—the sight of a dead animal by the side of the road is less distressing than the news of a deceased child—yet we usually feel a pang of empathy nonetheless.  The research of Christopher Bartneck, a robotics professor at the University of Canterbury in New Zealand, investigates where our sense of moral responsibility to computers falls on this empathy spectrum. One of Bartneck’s experiments is loosely related to the infamous Milgram obedience study, in which:
…research subjects were asked to administer increasingly powerful electrical shocks to a person pretending to be a volunteer “learner” in another room … As the shocks increased in intensity, the “learner” began to clearly suffer. They would scream and beg for the research subject to stop while a “scientist” in a white lab coat instructed the research subject to continue, and in videos of the experiment you can see some of the research subjects struggle with how to behave. The research subjects wanted to finish the experiment like they were told. But how exactly to respond to those terrible cries for mercy?[22] 
The Milgram experiment is controversial because it shows that people will obey figures of authority against their better conscience, even if it means subjecting their fellow humans to extreme pain. The research subjects in the Milgram experiment continued to administer the shocks to the “learners,” but they showed visible moral discomfort. Bartneck wanted to know what would happen if a robot, placed in a position similar to that of the learners, begged for its life. Would participants recognize the robot for what it is—a soulless machine—and extinguish its “life” without a moral struggle? Or would the participants’ tendency to view a computer as a social being give them pause?
Figure 1. The expressive iCat robot in Christopher Bartneck’s study, “Switching off a robot.”

In Bartneck’s study, human participants were paired with an “iCat” robot to play a game against a computer. The iCat had a human-sounding voice and expressive features, and it was helpful and agreeable. When communicating with its human partner during the game, the robot was polite and deferential, saying such things as: “Oh, could I possibly make a suggestion now?”[23] At the end of the game, an authority figure informed the research subjects that they needed to deactivate the iCat robot, and that as a result “they would eliminate everything that the robot was—all of its memories, all of its behavior, all of its personality would be gone forever.” As in the Milgram experiment, participants eventually turned the iCat off—but only after an extended period of moral unease. Following is the exchange between the iCat and one female participant as the robot faces its impending demise:
iCat:        It can’t be true. Switch me off? You’re not really going to switch me off, are you?
Woman:        Yes I will. You made a stupid choice! Yes.
iCat:        You can decide to keep me switched on. I will be completely silent. Would that be an idea?
Woman:        Okay. Yeah, uh, no, I will switch you off. I will switch you off.
iCat:        Please.
Woman:        No ‘please.’ This will happen. Now!
iCat:        I will be silent now. Is that all right?
Woman:        Yes. [Begins to switch the robot off]
iCat:        [Speech slowing] Please. You can still change your mind.
Woman:        No, no, no, no, no. [Turns off the robot]
The entire exchange takes nearly a minute, and the research subject is visibly confused and unsure throughout the episode. She attempts to rationalize her decision and tries to reason with the iCat directly. She hesitates when the robot begs for its life and repeatedly announces her intentions to turn it off, yet she does nothing. In short, she and other participants in Bartneck’s study behaved exactly like the research subjects in the Milgram obedience experiment: both groups ultimately obeyed the authority figures, but they both wrestled with their consciences before doing so. However, unlike the Milgram subjects, who were charged with tormenting flesh-and-blood human beings, Bartneck’s participants were instructed to “kill” a machine with no more feeling than a toaster. And, not only did participants feel bad about taking the iCat’s life, but they also felt guilty that they were supposedly erasing the robot’s memories and personality—everything that made it seem human. This guilt users felt over switching off an intelligent, agreeable robot was far more intense than what they experienced when deactivating an unintelligent, unhelpful robot. In fact, when Bartneck repeated his experiment with an iCat that was less socially adept and agreeable, participants hesitated for only about a third as long on average (11.8 seconds, versus 34.5 seconds for the helpful iCat).[24] Bartneck’s study demonstrates that, if we judge a computer as agreeable, polite, and intelligent—essentially a social being like ourselves—then we will perceive it as having a personality, reciprocate its helpfulness, and feel reluctant to cause it harm.
The Ethics of Affective Computing
If we equip computers with the ability to possess, recognize, and even express emotions, what does this imply for computer ethics? Edgar presents a dizzying array of ethical questions that affective computing begets:
The other question that arises is that, if machines were to become intelligent, what moral obligations we would have toward them. Would we treat them as slaves or equals? Should they have rights (to go along with the responsibilities we ask them to take on)? Could an artificial intelligence learn the difference between right and wrong? Could it be autonomous?[25]
Long before Bartneck’s iCat study, the 1968 film 2001: A Space Odyssey[26] explored this ethical dilemma. In one scene, astronaut Dave Bowman turns off the HAL 9000 computer after it has killed almost all of the other crew members. As a result of losing its memory modules, HAL is drained of its intelligence and consciousness. In an essay on “robot ethics,” Bartneck questions whether Bowman committed murder, in self-defense, by switching off the computer, or if he “simply conducted a necessary maintenance act.”[27] Bartneck does not answer his own question directly, but he argues that the animacy and agreeableness of an artificial intellect impact the user’s perception of that machine as a “social actor,” as Nass puts it. While Bowman did not legally commit murder according to the formal definition of the word, he destroyed the consciousness of an entity that appeared human in its interactions: HAL could communicate, make autonomous decisions, and even express emotions. As HAL is shut down, it—he?—expresses fear and reverts to childlike reasoning, repeating, “I’m afraid. I’m afraid, Dave. Dave, my mind is going. I can feel it.”[28]
The “HAL question” is an interesting one, but it is a dilemma we likely will not have to tackle in our lifetimes. HAL is capable of passing the Turing test, proposed by Alan Turing in 1950, which examines a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human.[29] HAL is able to both perceive and express emotions, whereas modern affective computers can only elicit an emotional response in humans. When it learns of Bowman’s plans to disconnect it, HAL tries to manipulate Bowman’s emotions: “I can tell from the tone of your voice, Dave, that you’re upset. Why don’t you take a stress pill and get some rest.” Present-day computers are much closer to the primitive “ELIZA” program developed in 1966 by Joseph Weizenbaum, which imitated a Rogerian psychotherapist when interacting with a user. As in talk psychotherapy, ELIZA directed conversation by rephrasing users’ inputs in the form of questions. ELIZA was convincing primarily because “users were generous in attributing understanding to it” and “suspended such judgments” that the computer knew nothing about their thoughts and emotions.[30] However, ELIZA broke down if users inputted an unconventional statement, for example:
User:        I am dead.
ELIZA:        And how long have you been dead?
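Weizenbaum’s rephrasing trick is simple enough to sketch in a few lines. The snippet below is a loose, modern illustration of the pattern-match-and-reflect technique, not the original ELIZA script; the rules and pronoun table here are invented for demonstration:

```python
import re

# Pronoun swaps applied to the captured fragment of the user's input
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

# A tiny Rogerian rule set: (input pattern, question template)
RULES = [
    (r"i am (.*)", "And how long have you been {0}?"),
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def respond(statement):
    text = statement.lower().rstrip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            # Reflect the captured fragment back as a question
            return template.format(reflect(match.group(1)))
    return "Please go on."  # fallback when no rule matches

print(respond("I am dead."))  # prints: And how long have you been dead?
```

The matcher has no model of meaning, which is exactly why “I am dead” is reflected back as earnestly as any sensible statement: the breakdown in the exchange above is built into the technique.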
If most computers are like the ELIZA program—stupid, but startlingly convincing—then the responsibility falls on us to ensure that we do not allow them to manipulate our emotions. The danger of personified computers is not that they have the intelligence and autonomy to beguile us for their own purposes, as HAL does in 2001. Computers possess nowhere near that kind of capability, and at present we have no reason to believe that they will ever achieve full cognitive and emotional functions. My worry is that people will anthropomorphize affective computers and “expect human-like intelligence, understanding, and actions from them.”[31] Humans place trust in “affective cues”[32] when other people have consistent body language, facial expressions, and tone of voice; they are distrustful when another human shows physiological signs of stress or deceptive behavior. Humans also have two separate pathways for expressing true versus fake emotions, such as the spontaneous Duchenne smile versus the so-called “Pan-Am” polite smile.
Unfortunately, unlike humans, computers can be programmed to portray any emotion with complete cogency, and they could even be made to “forge affective channels.” While a computer itself has no malicious intent to mislead its user, its engineers may decide to abuse that user’s trust. A computer could be programmed to express an insincere emotion in order to manipulate its user—to gather information for advertising purposes, to push for more intrusions on privacy, or to analyze the user’s emotional state. Picard presents one disturbing application of affective computers: that your emotional expressions could be used for raising your medical insurance rate if your computer determines that you are frequently in high-stress situations.[33] Our science-fiction-fueled fears of menacing, autonomous machines may be realized in a much different way than we expected—through friendly, agreeable software agents that are able to lie to us with a straight “face.”  
Along with being wary of computers’ ability to manipulate our emotions, we must also determine who or what is responsible for a computer’s actions. In his essay on robot ethics, Bartneck poses the question of whether HAL—the actual agent of murder—or its programmers are responsible for the deaths of the crew members in 2001.[34] Bartneck leaves this question unanswered, and it is a relevant one today. The state of California recently passed a law authorizing the testing of driverless vehicles on roads, with a human passenger required for safety purposes. Nevada has already approved licensing for these autonomous cars—human copilot optional. Autonomous vehicles have many benefits, such as greater fuel efficiency, lower emissions, and reducing the thousands of deaths and injuries that occur as a result of human distraction, intoxication, or miscalculation behind the wheel. However, even if driverless technology can match human capabilities, self-driving cars raise many legal and ethical issues. Practical questions—such as whether or not the police have the right to pull over autonomous vehicles, and how the federal government should regulate driverless technology—have yet to be answered.[35] And who will be blamed if the machine malfunctions? The human passenger? The software programmers? The automaker? The computer itself? To this question, California Governor Jerry Brown responds, “I don’t know—whoever owns the car, I would think. But we will work that out.” Indeed, if we seek to employ computers to their “fullest possible use,” there are many ethical implications of artificial intelligence and human-computer interaction that we still need to “work out.”
When I was in elementary school, circa 2003, I—like many of my peers—became a regular user of the website Neopets. The site hosts a virtual world called “Neopia,” in which players adopt Neopets and care for them by amassing currency in the form of Neopoints. What I hadn’t expected was how time-consuming being a Neopet owner would be. My pets required daily care and maintenance: feeding, grooming, education, entertainment, and ego massaging. If I failed to provide the proper care, the site warned ominously, my pets would waste away and eventually perish. What I found, after taking a weeklong hiatus from Neopia, was that my pets would never actually die. The founders of Neopets must have decided that such a fate would be too traumatizing for a user base of which eighty percent was between the ages of twelve and seventeen.[36] However, when I returned from a leave of absence, my pets would gaze out woefully at me from their virtual world, complaining of hunger, fatigue, and general malaise. Eventually, my mother decided that the website was too stressful and suggested that I deactivate my account. In order to do so, I had to disown my pets by, in Neopian terms, “abandoning” them to the Neopian Pound and the cruel hands of its overseer, “Dr. Death.” It speaks to how traumatizing the experience was that I still remember it today. First, the Pound informed me that I was a “cruel, irresponsible owner” and asked me to reconsider. Next, I had to click the “Abandon” button five times; each time, an increasingly desperate message popped up above my pet, such as “Fine, throw me away,” or “Don’t leave me here to die!” By the time I had heartlessly clicked “Abandon” the sufficient number of times for each of my four virtual pets, I was in tears.
I conclude with this childhood anecdote in order to bear witness to the power of affective technology. Even as a nine year-old, I was well aware that my Neopets were imaginary bits of cyberspace. Yet they—or rather, the creators—were able to manipulate my mental state to induce a whole range of emotions: loyalty, guilt, regret, dejection, and so on. Unlike our predecessors, my generation—and generations to come—will have grown up with computers and be exposed to them from a very young age. Even a tech-savvy software engineer such as my father is not tempted by the pull of websites such as Neopets or Facebook, most likely because they were not a large part of his early, formative years. So as computers evolve and become more integral in our daily lives, our expectations and perceptions about computers need to evolve as well.
We tend to buy into the idea that, as HAL declares, “it is an unalterable fact that [computers are] incapable of being wrong.”[37] We typically view computers as sources of cool, inarguable reason, and aspire to their level of algorithmic reasoning and accuracy. Atul Gawande, an operating room surgeon, says that “the highest praise I can get from my fellow surgeons is ‘You’re a machine, Gawande.’”[38] We also regard computers as emotionless and unfeeling; for example, when I feel depressed or devoid of emotion, I announce that “I feel like a computer.” However, as computers become more affective, or capable of inducing emotions in their users, we unconsciously treat them as human-like, social beings: by assigning them genders, empathizing with them, expecting them to flatter us, reciprocating their helpfulness, and retaliating when they wrong us. And as we spend an increasing amount of time with our computers, it is fitting that they become our quasi-emotional companions. Even though I know intellectually that Siri is just a soulless computer, it is somewhat comforting to hear her response:
Me:         I’m sad.
Siri:        I’m sorry to hear that. You can always talk to me, Amy.
But while Siri and I are still miles of neurons apart, this does not mean that computers are limited in terms of the range of human behavior they can learn to imitate. In his famous 1950 paper, “Computing Machinery and Intelligence,” Alan Turing lists many of the arguments he heard from artificial intelligence skeptics, who claimed that a machine would never be able to:
Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make some one fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity as a man, do something really new.[39]
Turing discounts these objections as temporary engineering roadblocks, and argues that “the criticism that a machine cannot have much diversity of behaviour is just a way of saying that it cannot have much storage capacity.” And storage capacity shows no sign of ceasing to expand: computers of the 1950s had a capacity of around ten kilobytes, while my laptop today has eight gigabytes of memory, nearly a million times as much. Therefore, as Turing asserts, independent-thinking, strawberry-enjoying machines are “possibilities of the near future, rather than Utopian dreams.”
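Turing’s storage point is easy to check with back-of-the-envelope arithmetic, using the essay’s own round figures (roughly ten kilobytes then, eight gigabytes now):

```python
KILOBYTE = 1024
GIGABYTE = 1024 ** 3

storage_1950s = 10 * KILOBYTE   # ~10 KB, a 1950s-era machine
storage_today = 8 * GIGABYTE    # 8 GB, a modern laptop

ratio = storage_today / storage_1950s
print(f"{ratio:,.0f}")  # prints: 838,861
```

A factor on the order of 800,000, which is indeed “nearly one million times” more storage.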
Works Cited
2001: A Space Odyssey. Dir. Stanley Kubrick. Perf. Keir Dullea, Gary Lockwood, and William Sylvester. Metro-Goldwyn-Mayer, 1968. Transcript.
Bartneck, Christopher, Michael van der Hoek, Omar Mubin, and Abdullah Al Mahmud. “Daisy, Daisy, Give me your answer do! Switching off a robot.” Proceedings of the 2nd ACM/IEEE International Conference on Human-Robot Interaction, Washington DC (2007): 217-222.
Cohn, Jonathan. “The Robot Will See You Now.” The Atlantic. March 2013. Accessed March 4, 2013.
Cort, Julia and Michael Bicks. "Smartest Machine On Earth." NOVA. Prod. Michael Bicks. PBS. 14 September 2011. Television. Transcript.
Edgar, Stacey L. Morality and Machines: Perspectives on Computer Ethics. Boston: Jones and Bartlett, 1997.
Gawande, Atul. “No Mistake. The future of medical care: machines that act like doctors, and doctors who act like machines.” The New Yorker. 30 March 1998. 75-76.
Hale, Mike. “Actors and Their Roles for $300, HAL? HAL!” The New York Times. 8 February 2011, C2.  
Markoff, John. "Collision in the Making Between Self-Driving Cars and How the World Works." The New York Times. 23 January 2012, B6.
Markoff, John. “Computer Wins on ‘Jeopardy!’: Trivial, It’s Not.” The New York Times. 16 February 2011, A1.
Nass, Clifford and Corina Yen. The Man Who Lied to His Laptop: What Machines Teach Us about Human Relationships. New York: Current, 2010.
Nass, Clifford and Youngme Moon. “Machines and Mindlessness: Social Responses to Computers.” Journal of Social Issues 56, no. 1 (2000): 81-103.
Picard, Rosalind W. Affective Computing. Cambridge, MA: MIT, 1997.
Seabrook, John. “Hello, Hal: Will we ever get a computer we can really talk to?” The New Yorker. 28 June 2008, 1-5.  
Spiegel, Alix. "No Mercy For Robots: Experiment Tests How Humans Relate To Machines." NPR. NPR, 28 January 2013. Accessed February 12, 2013.
Turing, Alan M. "Computing Machinery And Intelligence." Mind LIX.236 (1950): 433-60.
Weingarten, Marc. “As Children Adopt Pets, A Game Adopts Them.” The New York Times. 21 February 2002, 1-2.

[1] 2001: A Space Odyssey. Dir. Stanley Kubrick. Perf. Keir Dullea, Gary Lockwood, and William Sylvester. Metro-Goldwyn-Mayer, 1968. Transcript.
[2] Clifford Nass and Youngme Moon, “Machines and Mindlessness: Social Responses to Computers.” Journal of Social Issues 56, no. 1 (2000): 82.
[3] Christopher Bartneck, Michael van der Hoek, Omar Mubin, and Abdullah Al Mahmud, “Daisy, Daisy, Give me your answer do! Switching off a robot.” Proceedings of the 2nd ACM/IEEE International Conference on Human-Robot Interaction, Washington DC (2007): 217-222.
[4] Nass, “Machines and Mindlessness: Social Responses to Computers,” 88-89.
[5] Rosalind W. Picard, Affective Computing (Cambridge, MA: MIT, 1997), x-10.
[6] Stacey L. Edgar, Morality and Machines: Perspectives on Computer Ethics (Boston: Jones and Bartlett,
1997), 444.
[7] Picard, Affective Computing, x-10.
[8] Clifford Nass and Corina Yen, The Man Who Lied to His Laptop: What Machines Teach Us about Human Relationships (New York: Current, 2010), 4-5.
[9] Nass, “Machines and Mindlessness: Social Responses to Computers,” 84-86.
[10] Mike Hale, “Actors and Their Roles for $300, HAL? HAL!” The New York Times, 8 February 2011, C2.
[11] John Markoff, “Computer Wins on ‘Jeopardy!’: Trivial, It’s Not.” The New York Times, 16 February 2011, A1.
[12] Julia Cort and Michael Bicks, "Smartest Machine On Earth." NOVA, Prod. Michael Bicks. PBS. 14 September 2011. Television. Transcript.
[13] Cort, "Smartest Machine On Earth."
[14] Markoff, “Computer Wins on ‘Jeopardy!’: Trivial, It’s Not.”
[15] Jonathan Cohn, “The Robot Will See You Now,” The Atlantic, March 2013.
[16] Hale, “Actors and Their Roles for $300, HAL? HAL!”
[17] Nass, The Man Who Lied to His Laptop: What Machines Teach Us about Human Relationships, 32-34.
[18] Nass, The Man Who Lied to His Laptop: What Machines Teach Us about Human Relationships, 32-34.
[19] Alix Spiegel, "No Mercy For Robots: Experiment Tests How Humans Relate To Machines." NPR. NPR, 28 Jan. 2013. Web. Accessed 12 Feb. 2013.
[20] Nass, “Machines and Mindlessness: Social Responses to Computers,” 88-89.
[21] Spiegel, "No Mercy For Robots: Experiment Tests How Humans Relate To Machines."
[22] Spiegel, "No Mercy For Robots: Experiment Tests How Humans Relate To Machines."
[23] Spiegel, "No Mercy For Robots: Experiment Tests How Humans Relate To Machines."
[24] Bartneck, “Daisy, Daisy, Give me your answer do! Switching off a robot,” 217-222.
[25] Edgar, Morality and Machines: Perspectives on Computer Ethics, 454-455.
[26] 2001: A Space Odyssey.
[27] Bartneck, “Daisy, Daisy, Give me your answer do! Switching off a robot,” 217-222.
[28] Kubrick, 2001: A Space Odyssey.
[29] Alan M. Turing, "Computing Machinery And Intelligence." Mind LIX.236 (1950): 433-60.
[30] Picard, Affective Computing, 116.
[31] Picard, Affective Computing, 114.
[32] Picard, Affective Computing, 115.
[33] Picard, Affective Computing, 124.
[34] Bartneck, “Daisy, Daisy, Give me your answer do! Switching off a robot,” 217-222.
[35] John Markoff, "Collision in the Making Between Self-Driving Cars and How the World Works," The New York Times, 23 January 2012, B6.
[36] Marc Weingarten, “As Children Adopt Pets, A Game Adopts Them,” The New York Times, 21 February 2002, 1-2.
[37] Kubrick, 2001: A Space Odyssey.
[38] Atul Gawande, “No Mistake. The future of medical care: machines that act like doctors, and doctors who act like machines,” The New Yorker, 30 March, 1998, 76.
[39] Turing, "Computing Machinery And Intelligence."

Thursday, April 4, 2013

Facebook Home for Android

The problem with just getting going on this blog is that I can't prove how brilliant I was in predicting Facebook Home almost a year ago. I was discussing with a friend the popularity of custom launchers among a segment of Android users. You can change the look and feel of your home screen environment by switching out the visual shell from the stock version to whatever you can find on the Google Play store. There are quite a few of these, including some pretty intense 3D home screens, as well as less visually pleasing products.

It seemed obvious to me at the time that there was no reason for Facebook to get into the hardware market or even the device operating system game. Android is an open book, and the shell is replaceable. I laid out the vision to my friend of an Android phone with a complete Facebook experience. It's not just an app; it's the entire phone user experience. In fact, it makes sense for any content-driven organization to do the same. You can imagine enterprises building custom shells based around their organizational operations.

In any case, Facebook has followed my silent advice. But because, while I had the foresight to imagine this solution for Facebook, I didn't bother writing it down, I get no credit for the idea at all. I'm relying entirely on your taking my word on this. Or you can ask Shree. Remind him it was over coffee at Cafe Umbria in Pioneer Square.

-- Rob Bearman