In the film The Descendants, George Clooney’s character Matt King wrestles — sometimes comically — with new and old choices involving his family and Hawaii. In one case, King decides he wants to meet a rival, both just to meet him and to give him some news; that is, he (at least explicitly) has generally good reason to meet him. Perhaps he even ought to meet him. But when he actually does meet him, he cannot just do these things; he also argues with his rival, etc. King’s unplanned behaviors end up causing his rival considerable trouble.1
This struck me as related to some challenges in formulating what one should do — that is, in the “practical reasoning” side of ethics.
One way of getting practical advice out of virtue ethics is to say that one should do what the virtuous person would do in this situation. On its face, this seems right. But there are also some apparent counterexamples. Consider a short-tempered tennis player who has just lost a match.2 In this situation, the virtuous person would walk over to his opponent, shake his hand, and say something like “Good match.” But if this player does that, he is likely to become enraged and even assault his victorious opponent. So it seems better for him to walk off the court without attempting any of this — even though this is clearly rude.
The simple advice to do what the virtuous person would do in the present situation is, then, either not right or not so simple. It might be right, but not so simple to implement, if part of “the present situation” is one’s own psychological weaknesses. Aspects of the agent’s psychology — including character flaws — seem to license bad behavior and to remove reasons for taking the “best” actions.
King and other characters in The Descendants face this problem, both in the example above and at some other points in the movie. He begins a course of action (at least in part) because this is what the virtuous person would do. But then he is unable to really follow through because he lacks the necessary virtues.3 We might take this as a reminder of the ethical value of being humble — of accounting for our faults — when reasoning about what we ought to do.4 It is also a reminder of how frustrating this can be, especially when one can imagine (and might actually be able to manage) following through on doing what the virtuous person would do.
One way to cope with these weaknesses is to leverage other aspects of one’s situation. We can make public commitments to do the virtuous thing. We can change our environment, sometimes by binding our future selves, like Ulysses, from acting on our vices once we’ve begun our (hopefully) virtuous course of action. Perhaps new mobile technologies will be a substantial help here — helping us intervene in our own lives in this way.
- Perhaps deserved trouble. But this certainly didn’t play a stated role in the reasoning justifying King’s decision to meet him. [↩]
- This example is first used by Gary Watson (“Free Agency”, 1975) and put to this use by Michael Smith in his “Internalism” (1995). Smith introduces it as a clear problem for the “example” model of how what a virtuous person would do matters for what we should each do. [↩]
- Another reading of some of these events in The Descendants is that these characters actually want to do the “bad behaviors”, and they (perhaps unconsciously) use their good intentions to justify the course of action that leads to the bad behavior. [↩]
- Of course, the other side of such humility is being short on self-efficacy. [↩]
Are the conditions required to assert something conventions? Can they be formalized? Donald Davidson on whether convention is foundational to communication:
But Frege was surely right when he said, “There is no word or sign in language whose function is simply to assert something.” Frege, as we know, set out to rectify matters by inventing such a sign, the turnstile ‘⊢’ [sometimes called Frege’s ‘judgment stroke’ or ‘assertion sign’]. And here Frege was operating on the basis of a sound principle: if there is a conventional feature of language, it can be made manifest in the symbolism. However, before Frege invented the assertion sign he ought to have asked himself why no such sign existed before. Imagine this: the actor is acting a scene in which there is supposed to be a fire. (Albee’s Tiny Alice, for example.) It is his role to imitate as persuasively as he can a man who is trying to warn others of a fire. “Fire!” he screams. And perhaps he adds, at the behest of the author, “I mean it! Look at the smoke!” etc. And now a real fire breaks out, and the actor tries vainly to warn the real audience. “Fire!” he screams, “I mean it! Look at the smoke!” etc. If only he had Frege’s assertion sign.
It should be obvious that the assertion sign would do no good, for the actor would have used it in the first place, when he was only acting. Similar reasoning should convince us that it is no help to say that the stage, or the proscenium arch, creates a conventional setting that negates the convention of assertion. For if that were so, the acting convention could be put into symbols also; and of course no actor or director would use it. The plight of the actor is always with us. There is no known, agreed upon, publicly recognizable convention for making assertions. Or, for that matter, giving orders, asking questions, or making promises. These are all things we do, often successfully, and our success depends in part on our having made public our intention to do them. But it was not thanks to a convention that we succeeded.1
- Davidson, Donald. (1984). Communication and convention. Synthese 59 (1), 3-17. [↩]
Alex Tabarrok at Marginal Revolution blogs about how some ideas seem notably behind their time:
We are all familiar with ideas said to be ahead of their time, Babbage’s analytical engine and da Vinci’s helicopter are classic examples. We are also familiar with ideas “of their time,” ideas that were “in the air” and thus were often simultaneously discovered such as the telephone, calculus, evolution, and color photography. What is less commented on is the third possibility, ideas that could have been discovered much earlier but which were not, ideas behind their time.
In comparing ideas behind and ahead of their times, it’s worth considering the processes that identify them as such.
In the case of ideas ahead of their time, we rely on records and other evidence of their genesis (e.g., accounts of the use of flamethrowers at sea by the Byzantines). Later users and re-discoverers of these ideas are then in a position to marvel at their early genesis. In trying to see whether some idea qualifies as ahead of its time, this early genesis, lack of use or underuse, followed by extensive use and development, together serve as evidence for “ahead of its time” status.
On the other hand, in identifying ideas behind their time, it seems that we need different sorts of evidence. Tabarrok uses the standard of whether their fruits could have been produced a long time earlier (“A lot of the papers in say experimental social psychology published today could have been written a thousand years ago so psychology is behind its time”). We need evidence that people in a previous time had all the intellectual resources to generate and see the use of the idea. Perhaps this makes identifying ideas behind their time harder or more contentious.
Y(X = x) and P(Y | do(x))
Perhaps formal causal inference — and some kind of corresponding new notation, such as Pearl’s do(x) operator or potential outcomes — is an idea behind its time.1 Judea Pearl’s account of the history of structural equation modeling seems to suggest just this: exactly what the early developers of path models (Wright, Haavelmo, Simon) needed was new notation that would have allowed them to distinguish what they were doing (making causal claims with their models) from what others were already doing (making statistical claims).2
In fact, in his recent talk at Stanford, Pearl suggested just this — that if, say, the equality operator (=) had been replaced with some kind of assignment operator (say, :=), formal causal inference might have developed much earlier. We might be a lot further along in social science and applied evaluation of interventions if this had happened.
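The = vs. := contrast can be made concrete with a small simulation sketch (my own illustration, not from Pearl’s talk; the toy model and its coefficients are assumptions). When each structural equation is read as an assignment, an intervention do(X = x) simply overrides one assignment, while ordinary conditioning merely filters observed samples:

```python
import random

random.seed(0)

# Toy structural causal model: Z -> X, Z -> Y, X -> Y (Z confounds X and Y).
# Each line in sample() is an assignment (:=), not an algebraic equality:
# it says how a variable is *generated*, and so what changes under do(X = x).

def sample(do_x=None):
    z = random.gauss(0, 1)
    x = do_x if do_x is not None else z + random.gauss(0, 1)  # do(X=x) cuts Z -> X
    y = x + 2 * z + random.gauss(0, 1)
    return z, x, y

draws = [sample() for _ in range(100_000)]
# P(Y | X near 1): condition on having *observed* X near 1 (confounded by Z)
obs = [y for _, x, y in draws if abs(x - 1) < 0.1]
# P(Y | do(X = 1)): *intervene*, overriding the assignment to X
intv = [y for _, _, y in (sample(do_x=1) for _ in range(100_000))]

print(sum(obs) / len(obs))    # near 2: observing X = 1 suggests Z was high
print(sum(intv) / len(intv))  # near 1: setting X = 1 leaves Z at its mean
```

The two averages differ because, under observation, Z raises both X and Y, while under intervention only X’s direct effect on Y remains. That distinction is invisible if every equation is read as a symmetric equality.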
This example raises some questions about the criterion for ideas behind their time that “people in a previous time had all the intellectual resources to generate and see the use of the idea” (above). Pearl is a computer scientist by training and credits this background with his approach to causality as a problem of getting the formal language right — or moving between multiple formal languages. So we may owe this recent development to comfort with creating and evaluating the qualities of formal languages for practical purposes — a comfort found among computer scientists. Of course, e.g., philosophers and logicians also have been long comfortable with generating new formalisms. I think of Frege here.
So I’m not sure whether formal causal inference is an idea behind its time (or, if so, how far behind). But I’m glad we have it now.
- There is a “lively” debate about the relative value of these formalisms. For many of the dense causal models applicable to the social sciences (everything is potentially a confounder), potential outcomes seem like a good fit. But they can become awkward as the causal models get complex, with many exclusion restrictions (i.e. missing edges). [↩]
- See chapter 5 of Pearl, J. (2009). Causality: Models, Reasoning and Inference. 2nd Ed. Cambridge University Press. [↩]
Social psychologists like to write about attitudes. In fact, following Allport (1935), many of them have happily commented that the attitude is the most central and indispensable construct in social psychology (e.g., Petty, Wegener, & Fabrigar, 1997). Here is a standard definition of an attitude: an attitude is
a psychological tendency that is expressed by evaluating a particular entity with some degree of favor or disfavor. (Eagly & Chaiken, 2007, p. 598)
A somewhat more specific view has it that attitudes are
associations between a given object and a given summary evaluation of the object — associations that can vary in strength and, hence, in their accessibility from memory. (Fazio, 2007, p. 608)
Attitudes are also supposed to be important for predicting behavior, though the attitude–behavior link is the subject of a great deal of controversy, which I can’t fully treat here. An extreme, design-oriented view is expressed by a B.F. Skinner-channeling B.J. Fogg.
While Fogg isn’t representative of mainstream, contemporary social psychology, similarly skeptical thoughts are expressed by investigators like Schwarz (2007). On the other hand, one common view of the attitude–behavior link is that it is quite strong (Kraus, 1995), but that (a) many research methods fail to measure attitudes and behaviors with regard to the same entities (Ajzen & Fishbein, 1977) and (b) this link is an important empirical subject, not built into the attitude construct by definition (Fazio, 2007; Zanna & Rempel, 1988).
I’ll set aside for now just how useful attitudes are for predicting behavior. But what should we make of this construct? That is, should we keep it around? Do we expect something like social psychology’s attitudes to be part of a mature science of human behavior?
Maybe I’m a sucker for a good slogan, but when I read psychologists’ writing on attitudes, I think of Quine’s slogan: no entity without identity. That is, we shouldn’t posit objects that don’t have identity conditions — the conditions under which we say that X and Y are the same object.
This slogan, followed strictly in everyday life, can get tricky: a restaurant changes owners and name — is it the same restaurant? But it is pretty compelling when it comes to the entities we use in science. Of course, philosophers have debated this slogan — and many particular proposed cases of posited entities lacking identity conditions (e.g., entities in quantum physics) — so I’ll leave it that lacking identity conditions might vary in how much trouble it causes for a theory that uses such entities.
What I do want to comment on is how strikingly social psychology’s attitudes lack good identity conditions — and thus have no good way of being individuated. While we might think this doesn’t cause much trouble in this case (as I just noted), I actually think it creates a whole family of pseudo-problems that psychologists spend their time on and build theories around.
First, evidence that there is trouble in individuating attitudes: As is clear from the definition of an attitude provided above, attitudes are supposed to be individuated by their object:
This evaluative responding is directed to some entity or thing that is its object—that is, we may evaluate a person (George W. Bush), a city (Chicago), an ideology (conservatism), and a myriad of other entities. In the language of social psychology, an entity that is evaluated is known as an attitude object. Anything that is discriminable or held in mind, sometimes below the level of conscious awareness, can be evaluated and therefore can function as an attitude object. Attitude objects may be abstract (e.g., liberalism, religious fundamentalism) or concrete (e.g., the White House, my green raincoat) as well as individual (e.g., Condoleezza Rice, my sister-in-law) or collective (e.g., undocumented workers, European nations). (Eagly & Chaiken, 2007, p. 584)
So, for example, I can have an attitude towards Obama. This attitude can then have internal structure, such that there are multiple evaluations involved (e.g., implicit and explicit). This seems pretty straightforward: it is at least somewhat clear when some cognitive structures share Obama as their object.1
But trouble is not far around the corner. Much discussion of attitudes involves attitude objects that are abstract objects — like sets or classes of objects — embedded in a whole set of relationships. For example, I might have attitudes towards snakes, Blacks, or strawberry ice cream. And there isn’t any obvious way that the canonical class by which attitudes are to be individuated gets picked out. A person has evaluative responses to strawberry ice cream, Ben & Jerry’s brand ice cream, ice cream in general, the larger class of such foods (including frozen yogurt, gelato, “soft serve”), foods that cool one down when eaten, etc.
This doesn’t just work with ice cream. (Obama instantiates many properties and is a member of many relevant classes.)
At this point, you might be thinking, how does all this matter? Nothing hinges on whether X and Y are one attitude or two…2
The particular trouble on my mind is that social psychologists have actually introduced distinctions that make this individuation important. For example, Eagly & Chaiken (2007) make much of their distinction between intra-attitudinal and inter-attitudinal structure. They list different kinds of features each can have and use this distinction to tell different stories about attitude formation and maintenance. I’m not ready to give a full review of these kinds of cases in the literature, but I think this is a pretty compelling example of where it seems critical to have a good way of individuating attitudes if this theory is to work.
Maybe the deck was stacked against attitudes by my prior beliefs, but I’m not sure I see why they are a useful level of analysis distinct from associations embedded in networks or other, more general, knowledge structures.
What should we use in our science of human behavior instead?
I’m surprised to find myself recommending this, but what philosophers call propositional attitudes — attitudes towards propositions, which are something like what sentences/utterances express — seem pretty appealing. Of course, there has been a great deal of trouble individuating them (in fact, they are one of the kinds of entities Quine was so concerned about). But their individuation troubles aren’t quite so terrible as those of social psychology’s attitudes: a propositional attitude can involve multiple objects without trouble, and it is the propositional attitudes themselves that can then specify the relationships of these entities to other entities.
I’m far from sure that current theories of propositional attitudes are ready to be dropped in, unmodified, to work in empirical social psychology — Daniel Dennett has even warned philosophers to be wary of promoting propositional attitudes for use in cognitive science, since theory about them is in such a mess. But I do think we have reason to worry about the state of the attitude construct in theorizing by social psychologists.
Ajzen, I., & Fishbein, M. (1977). Attitude-Behavior Relations: A Theoretical Analysis and Review of Empirical Research. Psychological Bulletin, 84(5), 888–918.
Allport, G. W. (1935). Attitudes. In C. Murchison (Ed.), Handbook of Social Psychology (Vol. 2, pp. 798–844). Worcester, MA: Clark University Press.
Eagly, A. H., & Chaiken, S. (2007). The Advantages of an Inclusive Definition of Attitude. Social Cognition, 25(5), 582-602.
Fazio, R. H. (2007). Attitudes as object-evaluation associations of varying strength. Social Cognition, 25(5), 603-637.
Fodor, J. A. (1980). Methodological solipsism considered as a research strategy in cognitive psychology. Behavioral and Brain Sciences, 3(1), 63–73.
Kraus, S. J. (1995). Attitudes and the Prediction of Behavior: A Meta-Analysis of the Empirical Literature. Pers Soc Psychol Bull, 21(1), 58-75. doi: 10.1177/0146167295211007.
Petty, R. E., Wegener, D. T., & Fabrigar, L. R. (1997). Attitudes and Attitude Change. Annual Review of Psychology, 48(1), 609-647.
Quine, W.V.O. (1969). Speaking of Objects. Ontological Relativity and Other Essays. New York: Columbia University Press.
Schwarz, N. (2007). Attitude Construction: Evaluation in Context. Social Cognition, 25(5), 638-656.
Zanna, M. P., & Rempel, J. K. (1988). Attitudes: A new look at an old concept. The Social Psychology of Knowledge, 315–334.
- There is still plenty of room for trouble, but this will be common to many representational constructs. For example, there are the familiar problems of what attitudes Lois has towards Superman. Superman is Clark Kent, but it would be odd if this external fact (which Lois doesn’t know) should determine the structure of Lois’ mind. See Fodor (1980). [↩]
- You would likely be in good company; I’m guessing this is a thought that was running through the heads of many of the smart folks in the seminar, “Attitudes and Persuasion”, in which I rambled on about this issue two weeks ago. [↩]
People believe many things about themselves. Having an accurate view of oneself is valuable because it can be used to generate both expectations that will be fulfilled and plans that can be successfully executed. But since we are cognitively limited agents, there is pressure for us humans not only to have accurate self-views, but to have efficient ones.
In his new book, How We Get Along, philosopher David Velleman puts it this way:
At one extreme, I have a way of interpreting myself, a way that I want you to interpret me, a way that I think you do interpret me, a way that I think you suspect me of wanting you to interpret me, a way that I think you suspect me of thinking you do interpret me, and so on, each of these interpretations being distinct from all the others, and all of them being somehow crammed into my self-conception. At the other extreme, there is just one interpretation of me, which is common property between us, in that we not only hold it but interpret one another as holding it, and so on. If my goal is understanding, then the latter interpretation is clearly preferable, because it is so much simpler while being equally adequate, fruitful, and so on. (Lecture 3)
That is, one way my self-views can be efficient representations is if they serve double duty as others’ views of me — if my self-views borrow from others’ views of me and if my models of others’ views of me likewise borrow from my self-views.
Sometimes this back and forth between my self-view and my understanding of how others view me can seem counter to self-interest. People behave in ways that confirm others’ expectations of them, even when these expectations are negative (Snyder & Swann, 1978; for a review see Snyder & Stukas, 1999). And people interact with other people in ways such that their self-views are not challenged by others’ views of them and their self-views can double as representations of those others’ views, even when this means taking other people as having negative views of them (Swann & Read, 1981).
Self-verification and behavioral confirmation strategies
People use multiple strategies for achieving a match between their self-views and others’ views of them. These strategies come into play at different stages of social interaction.
Prior to and in anticipation of interaction, people seek out, and engage more thoroughly with, information and people they expect to be consistent with their self-views. For example, they spend more time reading statements about themselves that they expect to be consistent with their self-views — even if those particular self-views are negative.
During interaction, people behave in ways that elicit views of them from others that are consistent with their self-views. This is especially true when their self-views are being challenged, say because someone expresses a positive view of an aspect of a person who sees that aspect of themselves negatively. People can “go out of their way” to behave in ways that confirm even their negative self-views. On the other hand, people can change their self-views and their behavior to match the expectations of others; this primarily happens when a person’s view of a particular aspect of themselves is one they do not hold with certainty.
After interaction, people better remember expressions of others’ views of them that are consistent with their own. They can also construe others’ inconsistent views in ways that render them non-conflicting. Over the long term, people gravitate to others — including friends and spouses — who view them as they view themselves. Likewise, people seem to push away others who have different views of them.
Do people self-verify in interacting with computers?
Given that people engage in this array of self-verification strategies in interactions with other people, we might expect that they would do the same in interacting with computers, including mobile phones, on-screen agents, voices, and services.
One reason to think that people do self-verify in human–computer interaction is that people respond to computers in a myriad of social ways: people reciprocate with computers, take on computers as teammates, treat computer personalities like human personalities, etc. (for a review see Nass & Moon, 2000). So I expect that people use these same strategies when using interactive technologies — including personal computers, mobile phones, robots, cars, online services, etc.
While empirical research is needed to test this basic, well-motivated hypothesis, the broader implications of this idea, and its connections to how people understand new technological systems, are themselves exciting and important.
When systems model users
Since the 1980s, it has been quite common for system designers to think about the mental models people have of systems — and how these models are shaped by factors both in and out of the designer’s control (Gentner & Stevens, 1983). A familiar goal has been to lead people to a mental model that “matches” a conceptual model developed by the designer and is approximately equivalent to a true system model as far as common inputs and outputs go.
Many interactive systems develop a representation of their users. So in order to have a good mental model of these systems, people must represent how the system views them. This involves many of the same trade-offs considered above.
These considerations point out some potential problems for such systems. Technologists sometimes talk about the ability to provide serendipitous discovery. Quantifying aspects of one’s own life — including social behavior (e.g., Kass, 2007) and health — is a current trend in research, product development, and DIY self-experimentation. While sometimes this collected data is analyzed by its subject (e.g., because the subject is a researcher or hacker who just wants to dig into the data), to the extent that this trend goes mainstream, it will require simplification: building and presenting readily understandable models and views of these systems’ users.
The use of self-verification strategies and behavioral confirmation when interacting with computer systems — not only with people — thus presents a challenge to the ability of such systems to find users who are truly open to self-discovery. I think many of these same ideas apply equally to context-aware services on mobile phones and services that model one’s social network (even if they don’t present that model outright).
Social responses or more general confirmation bias
That people may self-verify with computers as well as people raises a further question about both self-verification theory and social responses to communication technologies theory (aka the “Media Equation”). We may wonder just how general these strategies and responses are: are these strategies and responses distinctively social?
Prior work on self-verification has left open the degree to which self-verification strategies are particular to self-views, rather than general to all relatively important and confidently held beliefs and attitudes. Likewise, it is unclear whether these self-verification strategies apply to all experiences that might challenge or confirm a self-view, or just to social interaction (including reading statements written or selected by another person).
Inspired by Velleman’s description above, we can think that it is just that others’ views of us have a dangerous potential to result in an explosion of the complexity of the world we need to model (“I have a way of interpreting myself, a way that I want you to interpret me, a way that I think you do interpret me, a way that I think you suspect me of wanting you to interpret me, a way that I think you suspect me of thinking you do interpret me, and so on”). Thus, if other systems can prompt this same regress, then the same frugality with our cognitions should lead to self-verification and behavioral confirmation. This is a reminder that treating media like real life, including treating computers like people, is not clearly non-adaptive (contra Reeves & Nass, 1996) or maladaptive (contra Lee, 2004).
Kass, A. (2007). Transforming the Mobile Phone into a Personal Performance Coach. In B. J. Fogg & D. Eckles (Eds.), Mobile Persuasion: 20 Perspectives on the Future of Behavior Change. Stanford Captology Media.
Lee, K. M. (2004). Why Presence Occurs: Evolutionary Psychology, Media Equation, and Presence. Presence: Teleoperators & Virtual Environments, 13(4), 494-505. doi: 10.1162/1054746041944830.
Nass, C., & Moon, Y. (2000). Machines and Mindlessness: Social Responses to Computers. Journal of Social Issues, 56(1), 81-103.
Reeves, B., & Nass, C. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press.
Snyder, M., & Stukas, A. A. (1999). Interpersonal processes: The interplay of cognitive, motivational, and behavioral activities in social interaction. Annual Review of Psychology, 50(1), 273-303.
Snyder, M., & Swann, W. B. (1978). Behavioral confirmation in social interaction: From social perception to social reality. Journal of Experimental Social Psychology, 14(2), 148-62.
Swann, W. B., & Read, S. J. (1981). Self-verification processes: How we sustain our self-conceptions. Journal of Experimental Social Psychology, 17(4), 351-372. doi: 10.1016/0022-1031(81)90043-3
Velleman, J.D. (2009). How We Get Along. Cambridge University Press. The draft I quote is available from http://ssrn.com/abstract=1008501