Ready-to-hand

Dean Eckles on people, technology & inference

Self-verification strategies in human–computer interaction

People believe many things about themselves. Having an accurate view of oneself is valuable because it can be used to generate both expectations that will be fulfilled and plans that can be successfully executed. But because we are cognitively limited agents, there is pressure for us humans not only to have accurate self-views, but also to have efficient ones.

In his new book, How We Get Along, philosopher David Velleman puts it this way:

At one extreme, I have a way of interpreting myself, a way that I want you to interpret me, a way that I think you do interpret me, a way that I think you suspect me of wanting you to interpret me, a way that I think you suspect me of thinking you do interpret me, and so on, each of these interpretations being distinct from all the others, and all of them being somehow crammed into my self-conception. At the other extreme, there is just one interpretation of me, which is common property between us, in that we not only hold it but interpret one another as holding it, and so on. If my goal is understanding, then the latter interpretation is clearly preferable, because it is so much simpler while being equally adequate, fruitful, and so on. (Lecture 3)

That is, one way my self-views can be efficient representations is if they serve double duty as others’ views of me — if my self-views borrow from others’ views of me and if my models of others’ views of me likewise borrow from my self-views.

Sometimes this back and forth between my self-view and my understanding of how others view me can seem counter to self-interest. People behave in ways that confirm others’ expectations of them, even when these expectations are negative (Snyder & Swann, 1978; for a review see Snyder & Stukas, 1999). And people interact with other people in ways such that their self-views are not challenged by others’ views of them and their self-views can double as representations of others’ views of them, even when this means taking other people to have negative views of them (Swann & Read, 1981).

Self-verification and behavioral confirmation strategies

People use multiple strategies for achieving a match between their self-views and others’ views of them. These strategies come into play at different stages of social interaction.

Prior to and in anticipation of interaction, people seek out, and engage more thoroughly with, information and other people expected to be consistent with their self-views. For example, they spend more time reading statements about themselves that they expect to be consistent with their self-views — even if those particular self-views are negative.

During interaction, people behave in ways that elicit views of them from others that are consistent with their self-views. This is especially true when their self-views are being challenged, say because someone expresses a positive view of an aspect of a person who sees that aspect of themselves negatively. People can “go out of their way” to behave in ways that elicit others’ confirmation of their negative self-views. On the other hand, people can change their self-views and their behavior to match the expectations of others; this primarily happens when a person’s view of a particular aspect of themselves is one they do not hold with certainty.

After interaction, people better remember expressions of others’ views of them that are consistent with their own. They also can construe inconsistent views in ways that render them non-conflicting. Over the long term, people gravitate toward others — including friends and spouses — who view them as they view themselves. Likewise, people seem to push away others who have different views of them.

Do people self-verify in interacting with computers?

Given that people engage in this array of self-verification strategies in interactions with other people, we might expect that they would do the same in interacting with computers, including mobile phones, on-screen agents, voices, and services.

One reason to think that people do self-verify in human–computer interaction is that people respond to computers in a myriad of social ways: people reciprocate with computers, take on computers as teammates, treat computer personalities like human personalities, etc. (for a review see Nass & Moon, 2000). So I expect that people use these same strategies when using interactive technologies — including personal computers, mobile phones, robots, cars, online services, etc.

While empirical research should be carried out to test this basic, well-motivated hypothesis, the broader implications of this idea, and its connections to how people understand new technological systems, are what make it exciting and important.

When systems model users

Since the 1980s, it has been quite common for system designers to think about the mental models people have of systems — and how these models are shaped by factors both in and out of the designer’s control (Gentner & Stevens, 1983). A familiar goal has been to lead people to a mental model that “matches” a conceptual model developed by the designer and is approximately equivalent to a true system model as far as common inputs and outputs go.

Many interactive systems develop a representation of their users. So in order to have a good mental model of these systems, people must represent how the system views them. This involves many of the same trade-offs considered above.
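
To make this concrete, here is a minimal sketch of a system keeping a simple model of its user and surfacing that model so the user’s mental model of the system can track it. The tag-weight representation and all names are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class UserModel:
    """Toy representation a system might keep of a user: weighted interest tags."""
    interests: dict = field(default_factory=dict)

    def update(self, tag: str, weight: float = 0.1) -> None:
        # Strengthen an interest each time the user engages with matching content
        self.interests[tag] = self.interests.get(tag, 0.0) + weight

    def explain(self, top_n: int = 3) -> str:
        # Surface the model so the user can see how the system views them
        top = sorted(self.interests.items(), key=lambda kv: kv[1], reverse=True)
        return "We think you're interested in: " + ", ".join(tag for tag, _ in top[:top_n])

model = UserModel()
model.update("photography", 0.5)
model.update("cycling", 0.2)
print(model.explain())  # We think you're interested in: photography, cycling
```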

These considerations point out some potential problems for such systems. Technologists sometimes talk about the ability to provide serendipitous discovery. Quantifying aspects of one’s own life — including social behavior (e.g., Kass, 2007) and health — is a current trend in research, product development, and DIY and self-experimentation. While sometimes this collected data is analyzed by its subject (e.g., because the subject is a researcher or hacker who just wants to dig into the data), to the extent that this trend goes mainstream, it will require simplification: building and presenting readily understandable models and views of these systems’ users.

The use of self-verification strategies and behavioral confirmation when interacting with computer systems — not only with people — thus presents a challenge to the ability of such systems to find users who are truly open to self-discovery. I think many of these same ideas apply equally to context-aware services on mobile phones and to services that model one’s social network (even if they don’t present that model outright).

Social responses or more general confirmation bias?

That people may self-verify with computers as well as with people raises a further question about both self-verification theory and the theory of social responses to communication technologies (aka the “Media Equation”). We may wonder just how general these strategies and responses are: are they distinctively social?

Prior work on self-verification has left open the degree to which self-verification strategies are particular to self-views, rather than general to all relatively important and confidently held beliefs and attitudes. Likewise, it is unclear to what extent all experiences that might challenge or confirm a self-view, rather than just social interaction (including reading statements written or selected by another person), are subject to these self-verification strategies.

Inspired by Velleman’s description above, we might think that it is precisely that others’ views of us have a dangerous potential to produce an explosion in the complexity of the world we need to model (“I have a way of interpreting myself, a way that I want you to interpret me, a way that I think you do interpret me, a way that I think you suspect me of wanting you to interpret me, a way that I think you suspect me of thinking you do interpret me, and so on”). Thus, if other systems can prompt this same regress, then the same frugality with our cognitions should lead to self-verification and behavioral confirmation. This is a reminder that treating media like real life, including treating computers like people, is not clearly non-adaptive (contra Reeves & Nass, 1996) or maladaptive (contra Lee, 2004).

References

Gentner, D., & Stevens, A. L. (1983). Mental Models. Lawrence Erlbaum Associates.

Kass, A. (2007). Transforming the Mobile Phone into a Personal Performance Coach. In B. J. Fogg & D. Eckles (Eds.), Mobile Persuasion: 20 Perspectives on the Future of Behavior Change. Stanford Captology Media.

Lee, K. M. (2004). Why Presence Occurs: Evolutionary Psychology, Media Equation, and Presence. Presence: Teleoperators & Virtual Environments, 13(4), 494-505. doi: 10.1162/1054746041944830.

Nass, C., & Moon, Y. (2000). Machines and Mindlessness: Social Responses to Computers. Journal of Social Issues, 56(1), 81-103.

Reeves, B., & Nass, C. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press.

Snyder, M., & Stukas, A. A. (1999). Interpersonal processes: The interplay of cognitive, motivational, and behavioral activities in social interaction. Annual Review of Psychology, 50(1), 273-303.

Snyder, M., & Swann, W. B. (1978). Behavioral confirmation in social interaction: From social perception to social reality. Journal of Experimental Social Psychology, 14(2), 148-162.

Swann, W. B., & Read, S. J. (1981). Self-verification processes: How we sustain our self-conceptions. Journal of Experimental Social Psychology, 17(4), 351-372. doi: 10.1016/0022-1031(81)90043-3

Velleman, J.D. (2009). How We Get Along. Cambridge University Press. The draft I quote is available from http://ssrn.com/abstract=1008501

Using social networks for persuasion profiling

BusinessWeek has an exuberant review of current industry research and product development related to understanding social networks using data from social network sites and other online communication such as email. It includes snippets from people doing very interesting social science research, like Duncan Watts, Cameron Marlow, and danah boyd. So it is worth checking out, even if you’re already familiar with the Facebook Data Team’s recent public reports (“Maintained Relationships”, “Gesundheit!”).

But I actually want to comment not on their comments, but on this section:

In an industry where the majority of ads go unclicked, even a small boost can make a big difference. One San Francisco advertising company, Rapleaf, carried out a friend-based campaign for a credit-card company that wanted to sell bank products to existing customers. Tailoring offers based on friends’ responses helped lift the average click rate from 0.9% to 2.7%. Although 97.3% of the people surfed past the ads, the click rate still tripled.

Rapleaf, which has harvested data from blogs, online forums, and social networks, says it follows the network behavior of 480 million people. It furnishes friendship data to help customers fine-tune their promotions. Its studies indicate borrowers are a better bet if their friends have higher credit ratings. This might mean a home buyer with a middling credit risk score of 550 should be treated as closer to 600 if most of his or her friends are in that range, says Rapleaf CEO Auren Hoffman.

The idea is that since you are more likely to behave like your friends, their behavior can be used to profile you and tailor some marketing to be more likely to result in compliance.
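
As a toy version of the arithmetic in the quoted example, one might shrink an individual’s score toward the mean of their friends’ scores. The linear blend and the 0.5 weight are my assumptions; the actual model is not public:

```python
def friend_adjusted_score(own_score: float, friend_scores: list, weight: float = 0.5) -> float:
    """Shrink an individual's score toward the mean of their friends' scores.

    The linear blend and the 0.5 weight are assumptions for illustration;
    the model described in the article is not public.
    """
    if not friend_scores:
        return own_score
    friends_mean = sum(friend_scores) / len(friend_scores)
    return (1 - weight) * own_score + weight * friends_mean

# A buyer at 550 whose friends cluster around 600 is treated as closer to 600:
print(friend_adjusted_score(550, [590, 600, 610]))  # 575.0
```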

In the Persuasive Technology Lab at Stanford University, BJ Fogg has long emphasized how powerful and worrying personalization based on this kind of “persuasion profile” can be. Imagine that rather than just personalizing screens based on the books you are expected to like (a familiar idea), Amazon selects the kinds of influence strategies used based on a representation of what strategies work best against you: “Dean is a sucker for limited-time offers”, “Foot-in-the-door works really well against Domenico, especially when he is buying a gift.”
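
A persuasion profile might minimally be just a table of per-user estimated response rates for named influence strategies, with the system presenting whichever strategy it expects to work best. A sketch, with all strategy names and numbers invented:

```python
# Hypothetical persuasion profile: estimated response rate per influence
# strategy. All strategy names and numbers are invented.
dean = {
    "limited_time_offer": 0.031,  # "Dean is a sucker for limited-time offers"
    "foot_in_the_door": 0.012,
    "social_proof": 0.019,
}

def pick_strategy(profile: dict) -> str:
    # Present whichever strategy this user is estimated to respond to most
    return max(profile, key=profile.get)

print(pick_strategy(dean))  # limited_time_offer
```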

In 2006 two of our students, Fred Leach and Schuyler Kaye, created a goofy video illustrating approximately this concept.

My sense is that this kind of personalization is in wide use at places like Amazon, except that their “units of analysis/personalization” are individual tactics (e.g., Gold Box offers), rather than the social influence strategies that can be implemented in many ways and in combination with each other.

What’s interesting about the Rapleaf work described by BusinessWeek is that this enables persuasion profiling even before a service provider or marketer knows anything about you — except that you were referred by or are otherwise connected to a person. This gives them the ability to estimate your persuasion profile by using your social neighborhood, even if you haven’t disclosed this information about your social network.
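
Under the (open, empirical) assumption that friends’ responses to influence strategies covary, a cold-start estimate of a new user’s persuasion profile could be as simple as averaging their friends’ profiles. A hypothetical sketch reusing the profile representation above:

```python
def estimate_profile(friend_profiles: list) -> dict:
    """Cold-start guess at a newcomer's persuasion profile: average the
    per-strategy response rates of their known friends. Whether friends'
    responses covary enough for this to work is the open empirical
    question discussed just below."""
    strategies = friend_profiles[0].keys()
    n = len(friend_profiles)
    return {s: sum(p.get(s, 0.0) for p in friend_profiles) / n for s in strategies}

# With no data on the new user, fall back to their neighborhood:
print(estimate_profile([
    {"limited_time_offer": 0.030, "social_proof": 0.020},
    {"limited_time_offer": 0.040, "social_proof": 0.010},
]))  # roughly {'limited_time_offer': 0.035, 'social_proof': 0.015}
```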

While there has been some research on individual differences in responses to influence strategies (including when used by computers), as far as I know there isn’t much work on just how much the responses of friends covary. As a tool for influencers online, it doesn’t matter much whether the variation explained by friends’ responses is also explained by other variables, as long as those variables aren’t available for influencers to collect. But for us social scientists, it would be interesting to understand the mechanism behind this relationship: is it just that friends are likely to be similar in a bunch of ways and these similarities predict our “persuasion profiles”, or do the processes of relationship creation directly involve these similarities?

This is an exciting and scary direction, and I want to learn more about it.

Being a lobster and using a hammer: “homuncular flexibility” and distal attribution

Jaron Lanier (2006) calls the ability of humans to learn to control virtual bodies that are quite different from our own “homuncular flexibility”. This is, for him, a dangerous idea. The idea is that the familiar mapping of the body represented in the cortical homunculus is only one option – we can flexibly act (and perceive) using quite different mappings, e.g., to virtual bodies. Your body can be tracked, and these movements can be used to control a lobster in virtual reality – just as you experience (via head-mounted display, haptic feedback, etc.) the virtual space from the perspective of the lobster under your control.

This name and description make this sound quite like science fiction. In this post, I assimilate homuncular flexibility to the much more general phenomenon of distal attribution (Loomis, 1992; White, 1970). When I have a perceptual experience, I can attribute that experience – and take it as being directed at or about – more proximal or more distal phenomena. For example, I can attribute it to my sensory surface, or I can attribute it to a flower in the distance. White (1970) proposed that more distal attribution occurs when the afference (perception) is lawfully related to efference (action) on the proximal side of that distal entity. That is, if my action and perception are lawfully related on “my side” of that entity in the causal tree, then I will make attributions to that entity. Loomis (1992) adds the requirement that this lawful relationship be successfully modeled. This is close, but not quite right: if I can make distal attributions even in the absence of an actual lawful relationship that I successfully model, then my (perhaps inaccurate) modeling of a (perhaps non-existent) lawful relationship will do just fine.

Just as I attribute a sensory experience to a flower and not the air between me and the flower, so the blind man or the skilled hammer-user can attribute a sensory experience to the ground or the nail, rather than to the handle of the cane or hammer. On consideration, I think we can see that these phenomena are very much what Lanier is talking about. When I learn to operate (and, though Lanier, 2006, does not treat this, to sense with) my lobster-body, it is because I have modeled an efference–afference relationship, yielding a kind of transparency. This is a quite familiar sort of experience. It might still be a quite dangerous or exciting idea, but its examples are ubiquitous, not restricted to virtual reality labs.

Lanier paraphrases biologist Jim Boyer as counting this capability as a kind of evolutionary artifact – a spandrel in the jargon of evolutionary theory. But I think a much better just-so evolutionary story can be given: it is this capability – to make distal attributions to the limits of the efference–afference relationships we successfully model – that makes us able to use tools so effectively. At an even more basic and general level, it is this capability that makes it possible for us to communicate meaningfully: our utterances have their meaning in the context of triangulating with other people such that the content of what we are saying is related to the common cause of both of our perceptual experiences (Davidson, 1984).

References

Davidson, D. (1984). Inquiries into Truth and Interpretation. Oxford: Clarendon Press.

Lanier, J. (2006). Homuncular flexibility. Edge.

Loomis, J. M. (1992). Distal attribution and presence. Presence: Teleoperators and Virtual Environments, 1(1), 113-119.

White, B. W. (1970). Perceptual findings with the vision-substitution system. IEEE Transactions on Man-Machine Systems, 11(1), 54-58.

Situational variation, attribution, and human-computer relationships

Mobile phones are gateways to our most important and enduring relationships with other people. But, like other communication technologies, the mobile phone is psychologically not only a medium: we also form enduring relationships with the devices themselves and their associated software and services (Sundar 2004). While different from relationships with other people, these human–technology relationships are also importantly social relationships. People exhibit a host of automatic, social responses to interactive technologies by applying familiar social rules, categories, and norms that are otherwise used in interacting with people (Reeves and Nass 1996; Nass and Moon 2000).

These human–technology relationships develop and endure over time and through radical changes in the situation. In particular, mobile phones are near-constant companions. They take on the roles of both medium for communication with other people and independent interaction partner through dynamic physical, social, and cultural environments and tasks. The global phenomenon of mobile phone use highlights both that relationships with people and technologies are influenced by variable context and that these devices are, in some ways, a constant amidst these everyday changes.

Situational variation and attribution

Situational variation is important for how people understand and interact with mobile technology. This variation is an input to the processes by which people disentangle the internal (personal or device) and external (situational) causes of a social entity’s behavior (Fiedler et al. 1999; Försterling 1992; Kelley 1967), so situational variation contributes to the traits and states attributed to human and technological entities. Furthermore, situational variation influences the relationship and interaction in other ways. For example, we have recently carried out an experiment providing evidence that situational variation itself (rather than the characteristics of the situations) influences memory, creativity, and self-disclosure to a mobile service; in particular, people disclose more in places where they have previously disclosed to the service than in new places (Sukumaran et al. 2009).
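
For readers unfamiliar with Kelley’s covariation model, here is a deliberately toy rendering of its logic; binary inputs and a three-way verdict are gross simplifications of the model and of how people actually attribute causes:

```python
def kelley_attribution(consensus: bool, distinctiveness: bool, consistency: bool) -> str:
    """Deliberately toy version of Kelley's covariation logic. Inputs:
    do others behave the same toward this stimulus (consensus)? Does the
    actor behave this way only toward this stimulus (distinctiveness)?
    Does the actor do so reliably over time (consistency)?"""
    if consistency and consensus and distinctiveness:
        return "external: attribute the behavior to the stimulus or situation"
    if consistency and not consensus and not distinctiveness:
        return "internal: attribute the behavior to the person (or device)"
    return "mixed or circumstantial attribution"

# An app that fails for everyone, only on this network, every time:
print(kelley_attribution(consensus=True, distinctiveness=True, consistency=True))
```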

Not only does the situation vary, but mobile technologies are increasingly responsive to the environments they share with their human interactants. A system’s systematic and purposive responsiveness to the environment means that explaining its behavior is about more than distinguishing internal and external causes: people explain behavior by attributing reasons to the entity, which may trivially refer to either internal or external causes. For example, contrast “Jack bought the house because it was secluded” (external) with “Jack bought the house because he wanted privacy” (internal) (Ross 1977, p. 176). Much research in the social cognition and attribution theory traditions of psychology has failed to address this richness of people’s everyday explanations of others’ behavior (Malle 2004; McClure 2002), but contemporary, interdisciplinary work is elaborating on theories and methods from philosophy and developmental psychology to this end (e.g., the contributions to Malle et al. 2001).

These two developments — the increasing role of situational variation in human-technology relationships and a new appreciation of the richness of everyday explanations of behavior — are important to consider together in designing new research in human-computer interaction, psychology, and communication. Here are three suggestions about directions to pursue in light of this:

1. Design systems that provide constancy and support through radical situational changes in both the social and physical environment. For example, we have created a system that uses the voices of participants in an upcoming event as audio primes during transition periods (Sohn et al. 2009). This can help ease the transition from a long corporate meeting to a chat with fellow parents at a child’s soccer game.

2. Design experimental manipulations and measures based on features of folk psychology identified by philosophers — the implicit theory or capabilities by which we attribute, e.g., beliefs, thoughts, and desires (propositional attitudes) to others (Dennett 1987). For example, attributions of propositional attitudes (e.g., beliefs) to an entity have the linguistic feature that one cannot substitute different terms that refer to the same object while maintaining the truth or appropriateness of the statement. This opacity in attributions of propositional attitudes is the subject of a large literature (e.g., following Quine 1953), but it has not been used as a lens for much empirical work, apart from some developmental psychology (e.g., Apperly and Robinson 2003). Human–computer interaction research should use this opacity (and other underused features of folk psychology) in studies of how people think about systems.

3. Connect work on mental models of systems (e.g., Kempton 1986; Norman 1988) to theories of social cognition and folk psychology. I think we can expect much larger overlap in the processes involved than the current research literature suggests: people use folk psychology to understand, predict, and explain technological systems — not just other people.

References

Apperly, I. A., & Robinson, E. J. (2003). When can children handle referential opacity? Evidence for systematic variation in 5- and 6-year-old children’s reasoning about beliefs and belief reports. Journal of Experimental Child Psychology, 85(4), 297-311. doi: 10.1016/S0022-0965(03)00099-7.

Dennett, D. C. (1987). The Intentional Stance. MIT Press.

Fiedler, K., Walther, E., & Nickel, S. (1999). Covariation-based attribution: On the ability to assess multiple covariates of an effect. Personality and Social Psychology Bulletin, 25(5), 609.

Försterling, F. (1992). The Kelley model as an analysis of variance analogy: How far can it be taken? Journal of Experimental Social Psychology, 28(5), 475-490. doi: 10.1016/0022-1031(92)90042-I.

Kelley, H. H. (1967). Attribution theory in social psychology. In Nebraska Symposium on Motivation (Vol. 15).
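
Kempton, W. (1986). Two theories of home heat control. Cognitive Science, 10(1), 75-90.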

Malle, B. F. (2004). How the Mind Explains Behavior: Folk Explanations, Meaning, and Social Interaction. Bradford Books.

Malle, B. F., Moses, L. J., & Baldwin, D. A. (2001). Intentions and Intentionality: Foundations of Social Cognition. MIT Press.

McClure, J. (2002). Goal-based explanations of actions and outcomes. In W. Stroebe & M. Hewstone (Eds.), European Review of Social Psychology (pp. 201-235). John Wiley & Sons. Retrieved from http://dx.doi.org/10.1002/0470013478.ch7.

Nass, C., & Moon, Y. (2000). Machines and Mindlessness: Social Responses to Computers. Journal of Social Issues, 56(1), 81-103.

Norman, D. A. (1988). The Psychology of Everyday Things. New York: Basic Books.

Quine, W. V. O. (1953). From a Logical Point of View: Nine Logico-Philosophical Essays. Harvard University Press.

Reeves, B., & Nass, C. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press.

Ross, L. (1977). The intuitive psychologist and his shortcomings: Distortions in the attribution process. In L. Berkowitz (Ed.), Advances in Experimental Social Psychology (Vol. 10, pp. 174-221). New York: Academic Press.

Sohn, T., Takayama, L., Eckles, D., & Ballagas, R. (2009). Auditory priming for upcoming events. Forthcoming in CHI ’09 Extended Abstracts on Human Factors in Computing Systems. Boston, Massachusetts, United States: ACM Press.

Sukumaran, A., Ophir, E., Eckles, D., & Nass, C. I. (2009). Variable Environments in Mobile Interaction Aid Creativity but Impair Learning and Self-disclosure. To be presented at the Association for Psychological Science Convention, San Francisco, California.

Sundar, S. S. (2004). Loyalty to computer terminals: is it anthropomorphism or consistency? Behaviour & Information Technology, 23(2), 107-118.

Activity streams, personalization, and beliefs about our social neighborhood

Every person who logs into Facebook is met with the same interface but with personalized content. This interface is News Feed, which lists “news stories” generated by a user’s Facebook friends. These news stories include the breaking news that Andrew was just tagged in a photo, that Neema declared he is a fan of a particular corporation, that Ellen joined a group expressing support for a charity, and that Alan says, “currently enjoying an iced coffee… anyone want to see a movie tonight?”

News Feed is an example of a particular design pattern that has recently become quite common – the activity stream. An activity stream aggregates actions of a set of individuals – such as a person’s egocentric social network – and displays the recent and/or interesting ones.
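
In code, the pattern is just a merge-and-rank over the events generated by one’s network; the particular blend of recency and estimated interest below is an invented stand-in for whatever a real service computes:

```python
import heapq
import time
from dataclasses import dataclass

@dataclass
class Story:
    actor: str
    verb: str          # e.g., "was tagged in a photo"
    timestamp: float   # seconds since the epoch
    interest: float    # estimated interestingness in [0, 1]; model unspecified

def activity_stream(stories: list, k: int = 10) -> list:
    """Keep the top-k stories by a blend of recency and estimated interest.
    The half-and-half blend and one-day decay are assumptions."""
    now = time.time()
    def score(s: Story) -> float:
        recency = max(0.0, 1 - (now - s.timestamp) / 86_400)  # fades over a day
        return 0.5 * recency + 0.5 * s.interest
    return heapq.nlargest(k, stories, key=score)
```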

In a previous, more fine-grained analysis of a particular (and now changed) interface element for setting one’s Facebook status message, I examined how activity streams bias our beliefs about the frequency of others’ participation on social network services (SNSs). It works like this:

  • We use availability to mind as a heuristic for estimating probability and frequency (Tversky & Kahneman, 1973). So if it is easier to think of a possibility, we judge it to be more likely or frequent. This heuristic is often helpful, but it also leads to bias due to, e.g., recent experience or search strategy (compare thinking of words starting with ‘r’ versus words with ‘r’ as the third letter).
  • Activity streams show a recent subset of the activity available (think for now of a simple activity stream, like that on one’s Twitter home page).
  • Activity streams show activity that is more likely to be interesting and is more likely to have comments on it.

Through the availability heuristic (and other mechanisms), this leads one to estimate that (1) people in one’s egocentric network are generating activity on Facebook more frequently than they actually are and (2) stories with particular characteristics (e.g., comments on them) are more (or less) common in one’s egocentric network than they actually are.
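
A small simulation makes the second estimate concrete. Suppose 10% of stories in my network attract comments, but the stream’s selection favors commented stories 5:1 (both numbers invented); the rate I see in the stream then far exceeds the true rate:

```python
import random

def simulate_stream_bias(n_items: int = 10_000, stream_size: int = 1_000,
                         true_rate: float = 0.10, boost: float = 5.0):
    """Compare the true rate of commented stories with the rate seen in a
    stream whose selection favors commented items by boost:1.
    All parameters are invented for illustration."""
    commented = [random.random() < true_rate for _ in range(n_items)]
    weights = [boost if c else 1.0 for c in commented]
    stream = random.choices(commented, weights=weights, k=stream_size)
    return sum(commented) / n_items, sum(stream) / stream_size

print(simulate_stream_bias())  # roughly (0.10, 0.36): comments look far more common
```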

Personalized cultivation

Thinking about this in the larger picture, one can see this as a kind of cultivation effect of algorithmic selection processes in interpersonal media. According to cultivation theory (see Williams, 2006, for an application to MMORPGs), our long-term exposure to media leads us to see the real world through the lens of the media world; this exposure gradually results in beliefs about the world based on the systematic distortions of the media world (Gerbner et al., 1980). For example, heavy television viewing predicts giving more “television world” answers to questions — overestimating the frequency of men working in law enforcement and the probability of experiencing violent acts. A critical difference here is that with activity streams, similar cultivation can occur with regard to our local social and cultural neighborhood.

Aims of personalization

Automated personalization has traditionally focused on optimizing for relevance – keeping users looking, getting them to click for more information, and prompting them to participate around this relevant content. But the considerations here highlight another goal of personalization: personalization for strategic influence on attitudes that matter for participation. These goals can be in tension. For example, should the system present…

The most interesting and relevant photos to a user?

Showing photographs from a user’s network that have many views and comments may surface photos that are very interesting to the user. However, seeing these photos can lead to inaccurate beliefs about how common different kinds of photos are (for example, overestimating the frequency of high-quality, artistic photos and underestimating the frequency of “poor-quality” cameraphone photos). This can discourage participation through perceptions of the norms for the network or the community.

On the other hand, seeing photos with so many comments or views may lead to overestimating how many comments one is likely to get on one’s own photo; this can result in disappointment following participation.

Activity from a user’s closest friends?

Assume that activity from close friends is more likely to be relevant and interesting. It might even be more likely to prompt participation, particularly in the form of comments and replies. But it can also bias judgments of the likely audience: all those people I don’t know so well are harder to bring to mind as it is, and if they don’t appear much in the activity stream for my network, I’m even less likely to consider them when creating my content. This could lead to greater self-disclosure, bad privacy experiences, poor identity management, and an eventual reduction in participation.
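
One way to frame the tension running through these examples is as an explicit ranking objective that trades engagement against representativeness. Both signals and the linear blend below are assumptions for illustration, not anyone’s actual ranker:

```python
def rank_stories(stories: list, alpha: float = 0.7) -> list:
    """Rank by alpha * relevance + (1 - alpha) * representativeness.

    Here `relevance` proxies for expected clicks and comments, and
    `representativeness` rewards typical content and weak ties so the
    stream distorts the reader's picture of their network less.
    alpha = 1.0 recovers pure relevance ranking; lower values trade
    engagement for a more accurate impression. Both signals and the
    linear blend are assumptions for illustration.
    """
    def score(s: dict) -> float:
        return alpha * s["relevance"] + (1 - alpha) * s["representativeness"]
    return sorted(stories, key=score, reverse=True)
```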

References

Gerbner, G., Gross, L., Morgan, M., & Signorielli, N. (1980). The “Mainstreaming” of America: Violence Profile No. 11. Journal of Communication, 30(3), 10-29.

Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207-232.

Williams, D. (2006). Virtual Cultivation: Online Worlds, Offline Perceptions. Journal of Communication, 56, 69-87.
