Self-verification strategies in human–computer interaction

People believe many things about themselves. Having an accurate view of oneself is valuable because it can be used to generate both expectations that will be fulfilled and plans that can be successfully executed. But because we are cognitively limited agents, there is pressure for us not only to have accurate self-views, but to have efficient ones.

In his new book, How We Get Along, philosopher David Velleman puts it this way:

At one extreme, I have a way of interpreting myself, a way that I want you to interpret me, a way that I think you do interpret me, a way that I think you suspect me of wanting you to interpret me, a way that I think you suspect me of thinking you do interpret me, and so on, each of these interpretations being distinct from all the others, and all of them being somehow crammed into my self-conception. At the other extreme, there is just one interpretation of me, which is common property between us, in that we not only hold it but interpret one another as holding it, and so on. If my goal is understanding, then the latter interpretation is clearly preferable, because it is so much simpler while being equally adequate, fruitful, and so on. (Lecture 3)

That is, one way my self-views can be efficient representations is if they serve double duty as others’ views of me — if my self-views borrow from others’ views of me and if my models of others’ views of me likewise borrow from my self-views.

Sometimes this back and forth between my self-view and my understanding of how others view me can seem counter to self-interest. People behave in ways that confirm others’ expectations of them, even when these expectations are negative (Snyder & Swann, 1978; for a review see Snyder & Stukas, 1999). And people interact with others in ways that keep their self-views from being challenged, so that their self-views can double as representations of others’ views of them — even when this means taking other people as having negative views of them (Swann & Read, 1981).

Self-verification and behavioral confirmation strategies

People use multiple strategies for achieving a match between their self-views and others’ views of them. These strategies come into play at different stages of social interaction.

Prior to and in anticipation of interaction, people seek out, and engage more thoroughly with, information and people expected to be consistent with their self-views. For example, they spend more time reading statements about themselves that they expect to confirm their self-views — even if those particular self-views are negative.

During interaction, people behave in ways that elicit views of them from others that are consistent with their self-views. This is especially true when their self-views are challenged, say because someone expresses a positive view of an aspect of a person who sees that aspect of themselves negatively. People can “go out of their way” to behave in ways that elicit others’ negative views matching their own. On the other hand, people can change their self-views and their behavior to match the expectations of others; this primarily happens when a person’s view of a particular aspect of themselves is one they do not hold with certainty.

After interaction, people better remember expressions of others’ views of them that are consistent with their own. They also can reinterpret others’ inconsistent views in ways that construe them as non-conflicting. Over the long term, people gravitate toward others — including friends and spouses — who view them as they view themselves. Likewise, people seem to push away others who have different views of them.

Do people self-verify in interacting with computers?

Given that people engage in this array of self-verification strategies in interactions with other people, we might expect that they would do the same in interacting with computers, including mobile phones, on-screen agents, voices, and services.

One reason to think that people do self-verify in human–computer interaction is that people respond to computers in a myriad of social ways: people reciprocate with computers, take on computers as teammates, treat computer personalities like human personalities, etc. (for a review see Nass & Moon, 2000). So I expect that people use these same strategies when using interactive technologies — including personal computers, mobile phones, robots, cars, online services, etc.

While empirical research should be carried out to test this basic, well-motivated hypothesis, what makes the idea exciting and important is its broader implications and its connections to how people understand new technological systems.

When systems model users

Since the 1980s, it has been quite common for system designers to think about the mental models people have of systems — and how these models are shaped by factors both in and out of the designer’s control (Gentner & Stevens, 1983). A familiar goal has been to lead people to a mental model that “matches” a conceptual model developed by the designer and is approximately equivalent to a true system model as far as common inputs and outputs go.

Many interactive systems develop a representation of their users. So in order to have a good mental model of these systems, people must represent how the system views them. This involves many of the same trade-offs considered above.

These considerations point to some potential problems for such systems. Technologists sometimes talk about systems’ ability to provide serendipitous discovery. Quantifying aspects of one’s own life — including social behavior (e.g., Kass, 2007) and health — is a current trend in research, product development, and DIY self-experimentation. While sometimes this collected data is analyzed by its own subject (e.g., because the subject is a researcher or hacker who just wants to dig into the data), to the extent that this trend goes mainstream, it will require simplification: building and presenting readily understandable models and views of these systems’ users.

The use of self-verification strategies and behavioral confirmation when interacting with computer systems — not only with people — thus presents a challenge to the ability of such systems to find users who are truly open to self-discovery. I think many of these same ideas apply equally to context-aware services on mobile phones and to services that model one’s social network (even if they don’t present that model outright).

Social responses or more general confirmation bias?

That people may self-verify with computers as well as people raises a further question about both self-verification theory and social responses to communication technologies theory (aka the “Media Equation”). We may wonder just how general these strategies and responses are: are these strategies and responses distinctively social?

Prior work on self-verification has left open the degree to which self-verification strategies are particular to self-views, rather than general to all relatively important and confidently held beliefs and attitudes. Likewise, it is unclear whether these strategies apply to all experiences that might challenge or confirm a self-view, or only to social interaction (including reading statements written or selected by another person).

Inspired by Velleman’s description above, we can think that others’ views of us have a dangerous potential to explode the complexity of the world we need to model (“I have a way of interpreting myself, a way that I want you to interpret me, a way that I think you do interpret me, a way that I think you suspect me of wanting you to interpret me, a way that I think you suspect me of thinking you do interpret me, and so on”). Thus, if other systems can prompt this same regress, then the same frugality with our cognitions should lead to self-verification and behavioral confirmation. This is a reminder that treating media like real life, including treating computers like people, is not clearly non-adaptive (contra Reeves & Nass, 1996) or maladaptive (contra Lee, 2004).


Gentner, D., & Stevens, A. L. (1983). Mental Models. Lawrence Erlbaum Associates.

Kass, A. (2007). Transforming the Mobile Phone into a Personal Performance Coach. In B. J. Fogg & D. Eckles (Eds.), Mobile Persuasion: 20 Perspectives on the Future of Behavior Change. Stanford Captology Media.

Lee, K. M. (2004). Why Presence Occurs: Evolutionary Psychology, Media Equation, and Presence. Presence: Teleoperators & Virtual Environments, 13(4), 494-505. doi: 10.1162/1054746041944830.

Nass, C., & Moon, Y. (2000). Machines and Mindlessness: Social Responses to Computers. Journal of Social Issues, 56(1), 81-103.

Reeves, B., & Nass, C. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press.

Snyder, M., & Stukas, A. A. (1999). Interpersonal processes: The interplay of cognitive, motivational, and behavioral activities in social interaction. Annual Review of Psychology, 50(1), 273-303.

Snyder, M., & Swann, W. B. (1978). Behavioral confirmation in social interaction: From social perception to social reality. Journal of Experimental Social Psychology, 14(2), 148-162.

Swann, W. B., & Read, S. J. (1981). Self-verification processes: How we sustain our self-conceptions. Journal of Experimental Social Psychology, 17(4), 351-372. doi: 10.1016/0022-1031(81)90043-3

Velleman, J.D. (2009). How We Get Along. Cambridge University Press. The draft I quote is available from
