Social and cultural costs of media multitasking

Today I’m attending the Media Multitasking workshop at Stanford. I’m going to just blog as I go, so these posts are going to perhaps be a bit rougher than usual.1

The workshop began with a short keynote from Patricia Greenfield, a psychology professor at UCLA, about the costs and benefits of media multitasking. Greenfield’s presentation struck me as representing an essentially conservative, even alarmist, perspective on media multitasking.

Exemplifying this perspective was Greenfield’s claim that media multitasking (by children) is disrupting family rituals and privileging peer interaction over interaction with family. Greenfield mixed in some examples of how having a personal mobile phone allows teens to interact with peers without their parents being in the loop (e.g., aware of who their children’s interaction partners are). These examples don’t strike me as particularly central to understanding media multitasking; instead, they highlight the pervasive alarmism about new media and remind me of how “helicopter parents'” extreme control of their children’s physical co-presence with others is also a change from “how things used to be”.

Face-to-face vs. mediated

The relationship of these worries about mobile phones and the allegedly decreasing control that parents have over their children’s social interaction to media multitasking is that mediated communication is being privileged over face-to-face interaction. Greenfield proposed that face-to-face interaction suffers from media use and media multi-tasking, and that this is worrisome because we have evolved for face-to-face interaction. She commented that face-to-face interaction enables empathy; there is an implicit contrast here with mediated interaction, but I’m not sure it is so obvious that mediated communication doesn’t enable empathy — including empathizing with targets that one would otherwise not encounter face-to-face and experiencing a persistent shared perspective with close, but distant, others (e.g., parents and college student children).

Family reunion

Greenfield cited a study of 30 homes in which children and a non-working parent only greeted the other parent returning home from work about one third of the time (Ochs et al., 2006), arguing — as I understood it — that this is symptomatic of a deprioritization of face-to-face interaction.

As another participant pointed out, this could also — if not in these particular cases, then likely in others — be a case of not feeling apart during the working day: that is, we can ask whether the children and non-working parents are communicating with the returning parent during the workday. In fact, Ochs et al. (2006, pp. 403–404) present an example of such a reunion (between husband and wife in this case) in which the participants have been in contact by mobile phone, and the conversation picks up where it left off (with the addition of some new information available by being present in the home).

Next

I’m looking forward to the rest of the workshop. I think one clear theme of the workshop is going to be differing emphasis on costs and benefits of media multitasking of different types. I expect Greenfield’s “doom and gloom” will continue to be contrasted with other perspectives — some of which already came up.

References

Ochs, E., Graesch, A. P., Mittmann, A., Bradbury, T., & Repetti, R. (2006). Video ethnography and ethnoarchaeological tracking. The Work and Family Handbook: Multi-Disciplinary Perspectives, Methods, and Approaches, 387–409.

  1. Which also means I’m multitasking, in some senses, through the whole conference.

Self-verification strategies in human–computer interaction

People believe many things about themselves. Having an accurate view of oneself is valuable because it can be used to generate both expectations that will be fulfilled and plans that can be successfully executed. But because we are cognitively limited agents, there is pressure for us humans to have not only accurate self-views, but efficient ones.

In his new book, How We Get Along, philosopher David Velleman puts it this way:

At one extreme, I have a way of interpreting myself, a way that I want you to interpret me, a way that I think you do interpret me, a way that I think you suspect me of wanting you to interpret me, a way that I think you suspect me of thinking you do interpret me, and so on, each of these interpretations being distinct from all the others, and all of them being somehow crammed into my self-conception. At the other extreme, there is just one interpretation of me, which is common property between us, in that we not only hold it but interpret one another as holding it, and so on. If my goal is understanding, then the latter interpretation is clearly preferable, because it is so much simpler while being equally adequate, fruitful, and so on. (Lecture 3)

That is, one way my self-views can be efficient representations is if they serve double duty as others’ views of me — if my self-views borrow from others’ views of me and if my models of others’ views of me likewise borrow from my self-views.

Sometimes this back and forth between my self-view and my understanding of how others view me can seem counter to self-interest. People behave in ways that confirm others’ expectations of them, even when these expectations are negative (Snyder & Swann, 1978; for a review see Snyder & Stukas, 1999). And people interact with other people in ways such that their self-views are not challenged by others’ views of them and their self-views can double as representations of those others’ views, even when this means taking other people as having negative views of them (Swann & Read, 1981).

Self-verification and behavioral confirmation strategies

People use multiple strategies for achieving a match between their self-views and others’ views of them. These strategies come into play at different stages of social interaction.

Prior to and in anticipation of interaction, people seek out and more thoroughly engage with information and interaction partners whose views of them are expected to be consistent with their self-views. For example, they spend more time reading statements about themselves that they expect to be consistent with their self-views — even when those particular self-views are negative.

During interaction, people behave in ways that elicit views of them from others that are consistent with their self-views. This is especially true when their self-views are being challenged, say because someone expresses a positive view of an aspect of a person who sees that aspect of themselves negatively. People can “go out of their way” to behave in ways that confirm even their negative self-views. On the other hand, people can change their self-views and their behavior to match the expectations of others; this primarily happens when a person’s view of a particular aspect of themselves is one they do not hold with certainty.

After interaction, people better remember expressions of others’ views of them that are consistent with their own. They also can construe others’ inconsistent views in ways that render them non-conflicting. Over the long term, people gravitate toward others — including friends and spouses — who view them as they view themselves. Likewise, people seem to push away others who have different views of them.

Do people self-verify in interacting with computers?

Given that people engage in this array of self-verification strategies in interactions with other people, we might expect that they would do the same in interacting with computers, including mobile phones, on-screen agents, voices, and services.

One reason to think that people do self-verify in human–computer interaction is that people respond to computers in a myriad of social ways: people reciprocate with computers, take on computers as teammates, treat computer personalities like human personalities, etc. (for a review see Nass & Moon, 2000). So I expect that people use these same strategies when using interactive technologies — including personal computers, mobile phones, robots, cars, online services, etc.

While empirical research should be carried out to test this basic, well-motivated hypothesis, the broader implications of this idea, and its connections to how people understand new technological systems, are also exciting and important.

When systems model users

Since the 1980s, it has been quite common for system designers to think about the mental models people have of systems — and how these models are shaped by factors both in and out of the designer’s control (Gentner & Stevens, 1983). A familiar goal has been to lead people to a mental model that “matches” a conceptual model developed by the designer and is approximately equivalent to a true system model as far as common inputs and outputs go.

Many interactive systems develop a representation of their users. So in order to have a good mental model of these systems, people must represent how the system views them. This involves many of the same trade-offs considered above.

These considerations point out some potential problems for such systems. Technologists sometimes talk about the ability to provide serendipitous discovery. Quantifying aspects of one’s own life — including social behavior (e.g., Kass, 2007) and health — is a current trend in research, product development, and DIY self-experimentation. While sometimes this collected data is then analyzed by its subject (e.g., because the subject is a researcher or hacker who just wants to dig into the data), to the extent that this trend goes mainstream, it will require simplification: building and presenting readily understandable models and views of these systems’ users.

The use of self-verification strategies and behavioral confirmation when interacting with computer systems — not only with people — thus presents a challenge to the ability of such systems to find users who are truly open to self-discovery. I think many of these same ideas apply equally to context-aware services on mobile phones and services that model one’s social network (even if they don’t present that model outright).

Social responses or more general confirmation bias?

That people may self-verify with computers as well as people raises a further question about both self-verification theory and social responses to communication technologies theory (aka the “Media Equation”). We may wonder just how general these strategies and responses are: are these strategies and responses distinctively social?

Prior work on self-verification has left open the degree to which self-verification strategies are particular to self-views, rather than general to all relatively important and confidently held beliefs and attitudes. Likewise, it is unclear to what extent these self-verification strategies apply to all experiences that might challenge or confirm a self-view, rather than just to social interaction (including reading statements written or selected by another person).

Inspired by Velleman’s description above, we can think that it is just that others’ views of us have a dangerous potential to result in an explosion of the complexity of the world we need to model (“I have a way of interpreting myself, a way that I want you to interpret me, a way that I think you do interpret me, a way that I think you suspect me of wanting you to interpret me, a way that I think you suspect me of thinking you do interpret me, and so on”). Thus, if other systems can prompt this same regress, then the same frugality with our cognitions should lead to self-verification and behavioral confirmation. This is a reminder that treating media like real life, including treating computers like people, is not clearly non-adaptive (contra Reeves & Nass, 1996) or maladaptive (contra Lee, 2004).

References

Gentner, D., & Stevens, A. L. (1983). Mental Models. Lawrence Erlbaum Associates.

Kass, A. (2007). Transforming the Mobile Phone into a Personal Performance Coach. In B. J. Fogg & D. Eckles (Eds.), Mobile Persuasion: 20 Perspectives on the Future of Behavior Change. Stanford Captology Media.

Lee, K. M. (2004). Why Presence Occurs: Evolutionary Psychology, Media Equation, and Presence. Presence: Teleoperators & Virtual Environments, 13(4), 494-505. doi: 10.1162/1054746041944830.

Nass, C., & Moon, Y. (2000). Machines and Mindlessness: Social Responses to Computers. Journal of Social Issues, 56(1), 81-103.

Reeves, B., & Nass, C. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press.

Snyder, M., & Stukas, A. A. (1999). Interpersonal processes: The interplay of cognitive, motivational, and behavioral activities in social interaction. Annual Review of Psychology, 50(1), 273-303.

Snyder, M., & Swann, W. B. (1978). Behavioral confirmation in social interaction: From social perception to social reality. Journal of Experimental Social Psychology, 14(2), 148-162.

Swann, W. B., & Read, S. J. (1981). Self-verification processes: How we sustain our self-conceptions. Journal of Experimental Social Psychology, 17(4), 351-372. doi: 10.1016/0022-1031(81)90043-3

Velleman, J.D. (2009). How We Get Along. Cambridge University Press. The draft I quote is available from http://ssrn.com/abstract=1008501

Using social networks for persuasion profiling

BusinessWeek has an exuberant review of current industry research and product development related to understanding social networks using data from social network sites and other online communication such as email. It includes snippets from people doing very interesting social science research, like Duncan Watts, Cameron Marlow, and danah boyd. So it is worth checking out, even if you’re already familiar with the Facebook Data Team’s recent public reports (“Maintained Relationships”, “Gesundheit!”).

But I actually want to comment not on their comments, but on this section:

In an industry where the majority of ads go unclicked, even a small boost can make a big difference. One San Francisco advertising company, Rapleaf, carried out a friend-based campaign for a credit-card company that wanted to sell bank products to existing customers. Tailoring offers based on friends’ responses helped lift the average click rate from 0.9% to 2.7%. Although 97.3% of the people surfed past the ads, the click rate still tripled.

Rapleaf, which has harvested data from blogs, online forums, and social networks, says it follows the network behavior of 480 million people. It furnishes friendship data to help customers fine-tune their promotions. Its studies indicate borrowers are a better bet if their friends have higher credit ratings. This might mean a home buyer with a middling credit risk score of 550 should be treated as closer to 600 if most of his or her friends are in that range, says Rapleaf CEO Auren Hoffman.

The idea is that since you are more likely to behave like your friends, their behavior can be used to profile you and tailor some marketing to be more likely to result in compliance.
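As a toy sketch of this idea (all numbers and the function here are hypothetical, not Rapleaf’s actual method), one could shrink a user’s predicted response rate toward the mean of their friends’ observed rates, leaning on the friend signal most when little is known about the user directly:

```python
# Toy sketch of friend-based response prediction (hypothetical data and model).
# When we have few direct observations of a user, their friends' average
# response rate dominates the prediction; with more data, the user's own
# observed rate takes over.

def predict_response_rate(user_rate, n_user_obs, friend_rates, weight=20):
    """Blend a user's own observed rate with their friends' average,
    weighting the friend signal by a fixed pseudo-count."""
    friend_mean = sum(friend_rates) / len(friend_rates)
    return (user_rate * n_user_obs + friend_mean * weight) / (n_user_obs + weight)

# Brand-new user with no direct data; friends clicked 2-4% of the time.
rate = predict_response_rate(user_rate=0.0, n_user_obs=0,
                             friend_rates=[0.02, 0.03, 0.04])
print(round(rate, 3))  # falls back entirely on the friends' mean
```

The pseudo-count `weight` is an invented knob; the point is only that social-neighborhood data can substitute for individual history until the latter accumulates.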

In the Persuasive Technology Lab at Stanford University, BJ Fogg has long emphasized how powerful and worrying personalization based on this kind of “persuasion profile” can be. Imagine that rather than just personalizing screens based on the books you are expected to like (a familiar idea), Amazon selects the kinds of influence strategies used based on a representation of what strategies work best against you: “Dean is a sucker for limited-time offers”, “Foot-in-the-door works really well against Domenico, especially when he is buying a gift.”

In 2006 two of our students, Fred Leach and Schuyler Kaye, created this goofy video illustrating approximately this concept:

My sense is that this kind of personalization is in wide use at places like Amazon, except that their “units of analysis/personalization” are individual tactics (e.g., Gold Box offers), rather than the social influence strategies that can be implemented in many ways and in combination with each other.

What’s interesting about the Rapleaf work described by BusinessWeek is that this enables persuasion profiling even before a service provider or marketer knows anything about you — except that you were referred by or are otherwise connected to a person. This gives them the ability to estimate your persuasion profile by using your social neighborhood, even if you haven’t disclosed this information about your social network.

While there has been some research on individual differences in responses to influence strategies (including when used by computers), as far as I know there isn’t much work on just how much the responses of friends covary. As a tool for influencers online, it doesn’t matter as much whether the variation explained by friends’ responses is also explained by other variables, as long as those variables aren’t available for the influencers to collect. But for us social scientists, it would be interesting to understand the mechanism behind this relationship: is it just that friends are likely to be similar in a bunch of ways and these similarities predict our “persuasion profiles”, or do the processes of relationship creation directly involve these similarities?
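One minimal way to frame the covariation question (with purely illustrative numbers, not real data): if friends share a latent trait that drives susceptibility to a strategy, friend pairs should agree in their responses more often than random pairs do. A simulation sketch:

```python
import random

# Illustrative simulation: a latent "persuasion profile" trait is shared
# between friends with probability `homophily`; strangers share it only
# by chance. Agreement rates then separate friend pairs from random pairs.

def simulate(n_pairs=10000, homophily=0.8, seed=0):
    rng = random.Random(seed)
    friend_agree = stranger_agree = 0
    for _ in range(n_pairs):
        trait = rng.random() < 0.5  # person's latent susceptibility
        # a friend shares the trait with probability `homophily`
        friend = trait if rng.random() < homophily else not trait
        stranger = rng.random() < 0.5
        friend_agree += (trait == friend)
        stranger_agree += (trait == stranger)
    return friend_agree / n_pairs, stranger_agree / n_pairs

f, s = simulate()
print(f, s)  # friend agreement near the homophily rate; strangers near 0.5
```

The empirical question is whether observed friend-pair agreement looks more like the first number than the second, and why.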

This is an exciting and scary direction, and I want to learn more about it.

Etching by Da Vinci? Representing legend, culture, and language

A photo I took in Piazza della Signoria of an etching, reportedly a self-portrait of Leonardo da Vinci that he etched behind his back on a dare onto the side of the Palazzo Vecchio.

Is this etching a self-portrait by Leonardo da Vinci created hundreds of years ago? That’s what I was told by a Californian friend who had “gone native” in Florence. Another matter: is this, in fact, a commonly believed and shared legend, and what other variations are there on it?

I shared the story with some fellow visitors in Florence on a lunch-time return to the piazza. Ed Chi tried to verify the rumor using a Web search, but with no success. At least in English, there didn’t seem to be much on this on the Web. (See my photo and comments on Flickr.)

I posted the photo on Flickr. I asked questions on LinkedIn and Yahoo! Answers, with no success. I also asked for help from workers on Mechanical Turk. Here’s part of how I asked for help:

There is a portrait etched in stone on the wall of Palazzo Vecchio in Piazza della Signoria in Florence (Firenze), Italy. It is close behind the copy of the David there. I have heard that there is a legend that this is a self-portrait by Leonardo da Vinci. I am looking for any information about this legend, alternate versions of the legend, or information about the real source of the portrait.

What results have been offered seem to suggest that this legend exists — though perhaps it is “actually” (at least as captured online, since perhaps the Leonardo theorists aren’t as active digital content creators) about Michelangelo:

The best way of finding out seemed to actually be my Flickr photo itself, since that’s where Daniel Witting provided the first two links above — however, this was a few months after the photo was first posted to Flickr. Turkers provided a couple useful links also (“Curiosities” above) on a shorter schedule and with a higher price. (I should have also tried uClue — where many former Google Answers researchers now work. This was recommended by Max Harper, who has studied Q&A sites in detail.)

Question and answer services along the lines of Yahoo! Answers rose to global (and U.S.) significance only after success in Korea, where Naver Knowledge iN pioneered the use of an online community to power a Q&A site. A major motivation in Korea was the limited amount of Korean content online. With Naver’s offering, Korea’s Internet-savvy, English-reading population made information newly available in Korean (and did plenty of other interesting work).

This is not as significant a motivation for Q&A sites used by English-speaking folks in the U.S., but the present case is an exception.

Some of the questions that made this case interesting to me:

  • What culturally shared beliefs become manifest online? During this whole process, I and others wondered whether this local legend was perhaps only shared orally. It seems that it is represented online after all — at least the Michelangelo variant — but it could have been otherwise.
  • How does the pair of languages a task requires knowledge of determine the processes, structures, and communities that are optimal for completing the task? For example, it seems quite important whether the target or source language has many more speakers than the other. (One could think about this simplistically in terms of conditional probabilities of skill with language A given skill with language B, and vice versa.)
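The conditional-probability framing in that last point can be made concrete. The speaker counts below are invented for illustration, but they show the asymmetry: when language A is much larger than language B, a randomly chosen B speaker is far more likely to know A than the reverse, so bilingual routing of tasks naturally runs through the smaller language’s community.

```python
# Hypothetical speaker counts (in millions) for two languages.
speakers_A = 1500   # a widely spoken language
speakers_B = 80     # a smaller language
bilingual = 40      # speak both

p_A_given_B = bilingual / speakers_B   # chance a B speaker also knows A
p_B_given_A = bilingual / speakers_A   # chance an A speaker also knows B

print(p_A_given_B)                 # 0.5: half of B speakers know A
print(round(p_B_given_A, 3))       # 0.027: very few A speakers know B
```

So a task requiring both languages is most cheaply matched to workers drawn from the B-speaking population, which is roughly what the Naver case illustrates.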

Being a lobster and using a hammer: “homuncular flexibility” and distal attribution

Jaron Lanier (2006) calls the ability of humans to learn to control virtual bodies that are quite different from our own “homuncular flexibility”. This is, for him, a dangerous idea. The idea is that the familiar mapping of the body represented in the cortical homunculus is only one option – we can flexibly act (and perceive) using quite other mappings, e.g., to virtual bodies. Your body can be tracked, and these movements can be used to control a lobster in virtual reality – while one experiences (via head-mounted display, haptic feedback, etc.) the virtual space from the perspective of the lobster under your control.

This name and description make this sound quite like science fiction. In this post, I assimilate homuncular flexibility to the much more general phenomenon of distal attribution (Loomis, 1992; White, 1970). When I have a perceptual experience, I can just as well attribute that experience – and take it as being directed at or about – more proximal or distal phenomena. For example, I can attribute it to my sensory surface, or I can attribute it to a flower in the distance. White (1970) proposed that more distal attribution occurs when the afference (perception) is lawfully related to efference (action) on the proximal side of that distal entity. That is, if my action and perception are lawfully related on “my side” of that entity in the causal tree, then I will make attributions to that entity. Loomis (1992) adds the requirement that this lawful relationship be successfully modeled. This is close, but not quite right: if I can make distal attributions even in the absence of an actual lawful relationship that I successfully model, then my (perhaps inaccurate) modeling of a (perhaps non-existent) lawful relationship will do just fine.

Just as I attribute a sensory experience to a flower and not the air between me and the flower, so the blind man or the skilled hammer-user can attribute a sensory experience to the ground or the nail, rather than the handle of the cane or hammer. On consideration, I think we can see that these phenomena are very much what Lanier is talking about. When I learn to operate (and, not treated by Lanier, 2006, sense) my lobster-body, it is because I have modeled an efference–afference relationship, yielding a kind of transparency. This is a quite familiar sort of experience. It might still be a quite dangerous or exciting idea, but its examples are ubiquitous, not restricted to virtual reality labs.

Lanier paraphrases biologist Jim Boyer as counting this capability as a kind of evolutionary artifact – a spandrel in the jargon of evolutionary theory. But I think a much better just-so evolutionary story can be given: it is this capability – to make distal attributions to the limits of the efference–afference relationships we successfully model – that makes us able to use tools so effectively. At an even more basic and general level, it is this capability that makes it possible for us to communicate meaningfully: our utterances have their meaning in the context of triangulating with other people such that the content of what we are saying is related to the common cause of both of our perceptual experiences (Davidson, 1984).

References

Davidson, D. (1984). Inquiries into Truth and Interpretation. Oxford: Clarendon Press.

Lanier, J. (2006). Homuncular flexibility. Edge.

Loomis, J. M. (1992). Distal attribution and presence. Presence: Teleoperators and Virtual Environments, 1(1), 113-119.

White, B. W. (1970). Perceptual findings with the vision-substitution system. IEEE Transactions on Man-Machine Systems, 11(1), 54-58.